LocalAI/backend/cpp/llama

Latest commit: 54ec6348fa by Ettore Di Giacinto
deps(llama.cpp): update (#1714)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-21 11:35:44 +01:00
CMakeLists.txt feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660) 2024-02-01 19:21:52 +01:00
grpc-server.cpp deps(llama.cpp): update (#1714) 2024-02-21 11:35:44 +01:00
json.hpp 🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254) 2023-11-11 13:14:59 +01:00
Makefile feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660) 2024-02-01 19:21:52 +01:00
utils.hpp feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660) 2024-02-01 19:21:52 +01:00