LocalAI/core/config
Ettore Di Giacinto c89271b2e4
feat(llama.cpp): add distributed llama.cpp inferencing (#2324)
* feat(llama.cpp): support distributed llama.cpp
* feat: allow tweaking how chat messages are merged together
* refactor
* Makefile: register to ALL_GRPC_BACKENDS
* refactoring: allow disabling auto-detection of backends
* minor fixups
* feat: add cmd to start rpc-server from llama.cpp
* ci: add ccache
---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
2024-05-15 01:17:02 +02:00
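
The distributed setup splits inference between LocalAI and one or more llama.cpp rpc-server workers. Below is a minimal launcher sketch of that workflow, assuming the rpc-server flags (-H, -p) and the LLAMACPP_GRPC_SERVERS environment variable described in the llama.cpp and LocalAI documentation; the binary names and worker addresses are illustrative, not prescribed by this commit:

```go
// Hypothetical launcher: spawn two llama.cpp rpc-server workers, then start
// LocalAI with its llama.cpp backend pointed at them.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	workers := []string{"127.0.0.1:50052", "127.0.0.1:50053"}

	// Start one rpc-server per worker address (assumed flags: -H host, -p port).
	for _, addr := range workers {
		host, port, ok := strings.Cut(addr, ":")
		if !ok {
			log.Fatalf("bad worker address: %s", addr)
		}
		cmd := exec.Command("rpc-server", "-H", host, "-p", port)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Start(); err != nil {
			log.Fatalf("starting worker %s: %v", addr, err)
		}
		fmt.Printf("worker started on %s (pid %d)\n", addr, cmd.Process.Pid)
	}

	// Hand LocalAI the comma-separated worker list; the llama.cpp backend
	// is expected to offload model layers across the listed RPC servers.
	local := exec.Command("local-ai")
	local.Env = append(os.Environ(),
		"LLAMACPP_GRPC_SERVERS="+strings.Join(workers, ","))
	local.Stdout, local.Stderr = os.Stdout, os.Stderr
	log.Fatal(local.Run())
}
```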
application_config.go     feat(ux): Add chat, tts, and image-gen pages to the WebUI (#2222)   2024-05-02 21:14:10 +02:00
backend_config_loader.go  feat(webui): ux improvements (#2247)                                 2024-05-07 01:17:07 +02:00
backend_config.go         feat(llama.cpp): add distributed llama.cpp inferencing (#2324)       2024-05-15 01:17:02 +02:00
config_test.go            refactor: move remaining api packages to core (#1731)                2024-03-01 16:19:53 +01:00
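
For context on what lives in this package: backend_config.go defines the per-model configuration and backend_config_loader.go reads those configs from disk. A hypothetical, heavily simplified loader sketch follows; the struct fields and YAML keys are illustrative and do not reflect LocalAI's actual schema:

```go
// Hypothetical backend-config loader in the spirit of backend_config_loader.go:
// read every *.yaml in a directory into a BackendConfig keyed by model name.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/yaml.v3"
)

type BackendConfig struct {
	Name    string `yaml:"name"`    // model name exposed by the API
	Backend string `yaml:"backend"` // e.g. "llama-cpp"
	Model   string `yaml:"model"`   // model file or URI
}

func LoadConfigs(dir string) (map[string]BackendConfig, error) {
	paths, err := filepath.Glob(filepath.Join(dir, "*.yaml"))
	if err != nil {
		return nil, err
	}
	configs := make(map[string]BackendConfig, len(paths))
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return nil, fmt.Errorf("reading %s: %w", p, err)
		}
		var cfg BackendConfig
		if err := yaml.Unmarshal(data, &cfg); err != nil {
			return nil, fmt.Errorf("parsing %s: %w", p, err)
		}
		configs[cfg.Name] = cfg
	}
	return configs, nil
}
```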