LocalAI/backend/cpp
Latest commit 697c769b64 by Ettore Di Giacinto:
fix(llama.cpp): enable cont batching when parallel is set (#1622)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-01-21 14:59:48 +01:00
grpc/
    move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build (#1576)
    2024-01-13 10:08:26 +01:00
llama/
    fix(llama.cpp): enable cont batching when parallel is set (#1622)
    2024-01-21 14:59:48 +01:00
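For context on what the llama/ commit subject implies: in llama.cpp, continuous batching is what lets several parallel request slots share a single decode batch, so it needs to be switched on whenever more than one slot is requested. Below is a minimal C++ sketch of that coupling; the `ServerParams` struct and `apply_parallel_setting` function are hypothetical stand-ins for illustration, not the actual code in LocalAI's gRPC server.

```cpp
#include <iostream>

// Hypothetical, simplified stand-in for llama.cpp server parameters.
// Field names mirror llama.cpp conventions but are assumptions here.
struct ServerParams {
    int  n_parallel    = 1;     // number of parallel request slots
    bool cont_batching = false; // continuous batching across slots
};

// Sketch of the idea behind the fix: when more than one parallel slot
// is requested, continuous batching must also be enabled, otherwise
// the slots cannot share a decode batch.
void apply_parallel_setting(ServerParams& params, int requested_parallel) {
    if (requested_parallel > 1) {
        params.n_parallel    = requested_parallel;
        params.cont_batching = true; // the coupling #1622 enforces
    }
}

int main() {
    ServerParams params;
    apply_parallel_setting(params, 4);
    std::cout << "n_parallel=" << params.n_parallel
              << " cont_batching=" << std::boolalpha << params.cont_batching
              << '\n'; // prints: n_parallel=4 cont_batching=true
    return 0;
}
```

The design point is simply that the two settings are not independent: deriving `cont_batching` from the parallel count in one place avoids a misconfiguration where multiple slots are allocated but never batched together.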