Ettore Di Giacinto
c89271b2e4
feat(llama.cpp): add distributed llama.cpp inferencing (#2324)
* feat(llama.cpp): support distributed llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: allow tweaking how chat messages are merged together
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Makefile: register to ALL_GRPC_BACKENDS
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactoring, allow disabling auto-detection of backends
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* minor fixups
Signed-off-by: mudler <mudler@localai.io>
* feat: add cmd to start rpc-server from llama.cpp
Signed-off-by: mudler <mudler@localai.io>
* ci: add ccache
Signed-off-by: mudler <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
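
A minimal sketch of the distributed setup this commit enables. The `rpc-server` flags, the `LLAMACPP_GRPC_SERVERS` variable, and the worker hostnames are assumptions based on llama.cpp's RPC backend and LocalAI conventions, not details taken from this commit:

```shell
# Assumed setup, for illustration only.

# On each worker node: start a llama.cpp RPC server
# (requires llama.cpp built with the RPC backend enabled).
rpc-server --host 0.0.0.0 --port 50052

# On the main node: point LocalAI at the workers so the
# llama.cpp backend can offload computation to them over gRPC.
LLAMACPP_GRPC_SERVERS="worker1:50052,worker2:50052" local-ai run
```

With this layout, the main node keeps serving the LocalAI API while inference work is split across the listed RPC servers.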
2024-05-15 01:17:02 +02:00