LocalAI/aio/cpu
Ettore Di Giacinto ea330d452d
models(gallery): add mistral-0.3 and command-r, update functions (#2388)
* models(gallery): add mistral-0.3 and command-r, update functions

Also add disable_parallel_new_lines to disable newlines in the JSON
output when forcing parallel tools. Some models (like mistral) can be
very sensitive to this when used for function calling.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* models(gallery): add aya-23-8b

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-23 19:16:08 +02:00
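As a rough sketch, such a flag would be enabled per model in its YAML definition. The placement below is an assumption about LocalAI's model-config schema (it may sit in a sub-section or differ by version), and the model name is illustrative only:

```yaml
# Hypothetical placement -- the exact nesting of this option is an
# assumption and may differ between LocalAI versions.
name: mistral-0.3
function:
  # Suppress newlines in the emitted JSON when parallel tool calls are
  # forced; some models (like mistral) are sensitive to them.
  disable_parallel_new_lines: true
```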
| File | Last commit | Date |
|---|---|---|
| embeddings.yaml | feat(aio): add intel profile (#1901) | 2024-03-26 |
| image-gen.yaml | feat(aio): add intel profile (#1901) | 2024-03-26 |
| README.md | feat(aio): entrypoint, update workflows (#1872) | 2024-03-21 |
| rerank.yaml | feat(rerankers): Add new backend, support jina rerankers API (#2121) | 2024-04-25 |
| speech-to-text.yaml | feat(aio): add tests, update model definitions (#1880) | 2024-03-22 |
| text-to-speech.yaml | feat(aio): add tests, update model definitions (#1880) | 2024-03-22 |
| text-to-text.yaml | models(gallery): add mistral-0.3 and command-r, update functions (#2388) | 2024-05-23 |
| vision.yaml | feat(aio): add intel profile (#1901) | 2024-03-26 |
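The YAML files listed above are the model definitions bundled with the AIO CPU image. As a minimal sketch of their general shape (field values here are illustrative assumptions, not the actual contents of these files):

```yaml
# Hypothetical model definition -- values are illustrative, not copied
# from the files in this directory.
name: gpt-4                # name the model is exposed under via the API
backend: llama-cpp         # a C++ backend, in keeping with the CPU-only image
context_size: 4096
mmap: true
parameters:
  model: huggingface://example/model.gguf   # hypothetical model URI
template:
  chat: chat               # prompt template used for chat completions
```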

AIO CPU size

Use this image for CPU-only deployments.

Please keep using only C++ backends, so that the base image stays as small as possible (without CUDA, cuDNN, Python, etc.).
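As a usage sketch, the CPU-only AIO image can be run with Docker Compose. The image tag, port, and volume path below follow the usual LocalAI conventions but are assumptions, not taken from this README:

```yaml
# Minimal sketch -- image tag and settings are assumptions; adjust as needed.
services:
  local-ai:
    image: localai/localai:latest-aio-cpu   # CPU-only all-in-one image
    ports:
      - "8080:8080"                         # LocalAI API typically listens on 8080
    volumes:
      - ./models:/build/models              # persist downloaded models (path is an assumption)
```

Once running, the bundled model definitions above (embeddings, image generation, reranking, speech, text, vision) should be served through the OpenAI-compatible API without further configuration.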