LocalAI/backend/cpp/llama
Name             Last commit                                                                       Date
CMakeLists.txt   deps(llama.cpp): update, support Gemma models (#1734)                             2024-02-21 17:23:38 +01:00
grpc-server.cpp  feat(grpc): return consumed token count and update response accordingly (#2035)  2024-04-15 19:47:11 +02:00
json.hpp
Makefile         test/fix: OSX Test Repair (#1843)                                                 2024-03-18 19:19:43 +01:00
utils.hpp        feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)                  2024-02-01 19:21:52 +01:00