Mirror of https://github.com/mudler/LocalAI.git (synced 2024-06-07 19:40:48 +00:00)
Commit c89271b2e4

* feat(llama.cpp): support distributed llama.cpp
* feat: allow tweaking how chat messages are merged together
* refactor
* Makefile: register to ALL_GRPC_BACKENDS
* refactoring, allow disabling auto-detection of backends
* minor fixups
* feat: add cmd to start rpc-server from llama.cpp
* ci: add ccache

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
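The first bullet above is the core of the change: llama.cpp inference can be offloaded to remote rpc-server workers instead of running only on the local host. Below is a minimal sketch, in Go, of how such a setup might be driven end to end. The `rpc-server` flags, the `local-ai` binary name, and the `LLAMACPP_GRPC_SERVERS` variable are assumptions drawn from llama.cpp's RPC example and LocalAI's distributed-inference documentation, not verified against this exact commit.

```go
// Hypothetical sketch: start one llama.cpp rpc-server worker, then launch a
// LocalAI instance that offloads work to it. Flags, binary names, and the
// environment variable are assumptions (see the lead-in above).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Start a llama.cpp rpc-server worker listening on port 50052.
	// (-H/-p follow llama.cpp's rpc example; adjust for your build.)
	worker := exec.Command("rpc-server", "-H", "0.0.0.0", "-p", "50052")
	worker.Stdout, worker.Stderr = os.Stdout, os.Stderr
	if err := worker.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to start worker:", err)
		os.Exit(1)
	}

	// Point the main LocalAI instance at the worker(s) via a
	// comma-separated host:port list (assumed environment variable).
	srv := exec.Command("local-ai")
	srv.Env = append(os.Environ(), "LLAMACPP_GRPC_SERVERS=127.0.0.1:50052")
	srv.Stdout, srv.Stderr = os.Stdout, os.Stderr
	if err := srv.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "local-ai exited:", err)
		os.Exit(1)
	}
}
```

In practice the workers would run on separate machines; the same comma-separated list would then carry each worker's reachable address and port.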
Directory listing:

* disabled
* bump_deps.yaml
* bump_docs.yaml
* checksum_checker.yaml
* dependabot_auto.yml
* generate_grpc_cache.yaml
* image_build.yml
* image-pr.yml
* image.yml
* labeler.yml
* localaibot_automerge.yml
* release.yaml
* secscan.yaml
* test-extra.yml
* test.yml
* update_swagger.yaml
* yaml-check.yml