LocalAI/backend
Dionysius (commit 441e2965ff)
move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build (#1576)
* move BUILD_GRPC_FOR_BACKEND_LLAMA option to makefile

* review: oversight, fixup cmake_args

Signed-off-by: Dionysius <1341084+dionysius@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-01-13 10:08:26 +01:00
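
The commit gates the llama.cpp backend's gRPC build behind the BUILD_GRPC_FOR_BACKEND_LLAMA flag directly in make, where any recipe line that exits non-zero aborts the build on the spot instead of the error being swallowed inside a wrapper script, and forwards the resulting CMAKE_ARGS to the backend build. Below is a minimal sketch of that pattern only; the target names, directories, and CMake flags are assumptions for illustration, not LocalAI's actual Makefile.

```makefile
# Minimal sketch under assumed names and paths -- not LocalAI's real Makefile.
BUILD_GRPC_FOR_BACKEND_LLAMA ?= false
CMAKE_ARGS ?=

ifeq ($(BUILD_GRPC_FOR_BACKEND_LLAMA),true)
# Build gRPC from source and let the llama.cpp backend's CMake find it.
GRPC_INSTALL_DIR := $(abspath backend/cpp/grpc/installed_packages)
CMAKE_ARGS += -DCMAKE_PREFIX_PATH=$(GRPC_INSTALL_DIR)
GRPC_DEP := build-grpc
endif

.PHONY: build-grpc grpc-server

# Each recipe line runs in its own shell; a non-zero exit status from cmake
# makes `make` stop and fail immediately, which is the behaviour the commit
# title refers to.
build-grpc:
	cmake -S backend/cpp/grpc -B backend/cpp/grpc/build \
		-DCMAKE_INSTALL_PREFIX=$(GRPC_INSTALL_DIR)
	cmake --build backend/cpp/grpc/build --target install

# The llama gRPC server depends on build-grpc only when the flag is enabled;
# CMAKE_ARGS (including the gRPC prefix, if set) is forwarded to the sub-make.
grpc-server: $(GRPC_DEP)
	CMAKE_ARGS="$(CMAKE_ARGS)" $(MAKE) -C backend/cpp/llama grpc-server
```

Invoking `make BUILD_GRPC_FOR_BACKEND_LLAMA=true grpc-server` with this layout builds and installs gRPC first, and any failure in that step fails the whole build rather than silently producing a backend linked against a missing dependency.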
cpp            move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build (#1576)   2024-01-13 10:08:26 +01:00
go             Revert "[Refactor]: Core/API Split" (#1550)   2024-01-05 18:04:46 +01:00
python         feat: more embedded models, coqui fixes, add model usage and description (#1556)   2024-01-08 00:37:02 +01:00
backend.proto  feat(diffusers): update, add autopipeline, controlnet (#1432)   2023-12-13 19:20:22 +01:00