LocalAI/backend/cpp
Latest commit eaf85a30f9 by Sebastian (2024-01-21 09:56:14 +01:00):

fix(llama.cpp): Enable parallel requests (#1616)

integrate changes from llama.cpp

Signed-off-by: Sebastian <tauven@gmail.com>
grpc
    move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build (#1576)
    2024-01-13 10:08:26 +01:00

llama
    fix(llama.cpp): Enable parallel requests (#1616)
    2024-01-21 09:56:14 +01:00