LocalAI/core
Latest commit 180cd4ccda by Ettore Di Giacinto (2024-04-21 16:34:00 +02:00):
fix(llama.cpp-ggml): fixup max_tokens for old backend (#2094)
fix(llama.cpp-ggml): set 0 as default for `max_tokens`

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
backend    Add tensor_parallel_size setting to vllm setting items (#2085)      2024-04-20 14:37:02 +00:00
cli        models(gallery): add gallery (#2078)                                2024-04-20 15:22:54 +02:00
clients    feat(store): add Golang client (#1977)                              2024-04-16 15:54:14 +02:00
config     fix(llama.cpp-ggml): fixup max_tokens for old backend (#2094)       2024-04-21 16:34:00 +02:00
http       refactor(routes): split routes registration (#2077)                 2024-04-21 01:19:57 +02:00
schema     feat(functions): support models with no grammar, add tests (#2068)  2024-04-18 22:43:12 +02:00
services   Revert #1963 (#2056)                                                2024-04-17 23:33:49 +02:00
startup    feat: fiber logs with zerolog and add trace level (#2082)           2024-04-20 10:43:37 +02:00