LocalAI/backend/cpp/llama
Sebastian eaf85a30f9
fix(llama.cpp): Enable parallel requests (#1616)
integrate changes from llama.cpp

Signed-off-by: Sebastian <tauven@gmail.com>
2024-01-21 09:56:14 +01:00
CMakeLists.txt
grpc-server.cpp
json.hpp
Makefile