LocalAI/pkg/model
Ettore Di Giacinto c89271b2e4
feat(llama.cpp): add distributed llama.cpp inferencing (#2324)
* feat(llama.cpp): support distributed llama.cpp

* feat: allow tweaking how chat messages are merged together

* refactor

* Makefile: register to ALL_GRPC_BACKENDS

* refactoring: allow disabling auto-detection of backends

* minor fixups

* feat: add cmd to start rpc-server from llama.cpp

* ci: add ccache

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: mudler <mudler@localai.io>
2024-05-15 01:17:02 +02:00
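The feature above builds on llama.cpp's RPC backend, where one or more `rpc-server` workers expose their local compute and a client offloads model layers to them over the network. A minimal sketch of that setup, assuming the upstream llama.cpp binary names, CMake flag, and CLI options as of mid-2024 (they may differ by version; addresses and the model path are placeholders):

```shell
# Build llama.cpp with the RPC backend enabled
# (assumption: the GGML_RPC CMake option from upstream llama.cpp).
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# On each worker machine, start an rpc-server that exposes local compute:
./build/bin/rpc-server -H 0.0.0.0 -p 50052

# On the head node, point the client at one or more workers; model layers
# are distributed across the listed endpoints:
./build/bin/llama-cli -m model.gguf -p "Hello" \
  --rpc 192.168.1.10:50052,192.168.1.11:50052
```

The commit's "add cmd to start rpc-server" item wires an equivalent worker-start command into LocalAI itself, so a node can join a distributed inference pool without invoking the llama.cpp binaries directly.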
initializers.go feat(llama.cpp): add distributed llama.cpp inferencing (#2324) 2024-05-15 01:17:02 +02:00
loader.go fix: security scanner warning noise: error handlers part 2 (#2145) 2024-04-29 15:11:42 +02:00
loader_test.go models(gallery): add new models to the gallery (#2124) 2024-04-25 01:28:02 +02:00
model_suite_test.go tests: add template tests (#2063) 2024-04-18 10:57:24 +02:00
options.go Revert "[Refactor]: Core/API Split" (#1550) 2024-01-05 18:04:46 +01:00
process.go fix: security scanner warning noise: error handlers part 2 (#2145) 2024-04-29 15:11:42 +02:00
watchdog.go feat: first pass at improving logging (#1956) 2024-04-04 09:24:22 +02:00