LocalAI/backend/cpp/llama
Last commit: 697c769b64 by Ettore Di Giacinto, 2024-01-21 14:59:48 +01:00
fix(llama.cpp): enable cont batching when parallel is set (#1622)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
CMakeLists.txt    Fix: Set proper Homebrew install location for x86 Macs (#1510)                                       2023-12-30 12:37:26 +01:00
grpc-server.cpp   fix(llama.cpp): enable cont batching when parallel is set (#1622)                                     2024-01-21 14:59:48 +01:00
json.hpp          🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)    2023-11-11 13:14:59 +01:00
Makefile          update(llama.cpp): update server, correctly propagate LLAMA_VERSION (#1440)                          2023-12-15 08:26:48 +01:00