LocalAI/pkg/backend/llm

Latest commit: c62504ac92 by Ettore Di Giacinto (2023-10-26 07:43:31 +02:00)
cleanup: drop bloomz and ggllm as now supported by llama.cpp (#1217)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Directory      Last commit                                                                              Date
bert           fix: drop racy code, refactor and group API schema (#931)                                2023-08-20 14:04:45 +02:00
gpt4all        fix: drop racy code, refactor and group API schema (#931)                                2023-08-20 14:04:45 +02:00
langchain      fix: drop racy code, refactor and group API schema (#931)                                2023-08-20 14:04:45 +02:00
llama          feat(speculative-sampling): allow to specify a draft model in the model config (#1052)   2023-09-14 17:44:16 +02:00
llama-stable   feat: add llama-stable backend (#932)                                                    2023-08-20 16:35:42 +02:00
rwkv           Feat: rwkv improvements: (#937)                                                          2023-08-22 18:48:06 +02:00
transformers   fix: drop racy code, refactor and group API schema (#931)                                2023-08-20 14:04:45 +02:00