LocalAI/backend
Latest commit 697c769b64 by Ettore Di Giacinto, 2024-01-21 14:59:48 +01:00:
fix(llama.cpp): enable cont batching when parallel is set (#1622)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Name            Last commit                                                         Last commit date
cpp             fix(llama.cpp): enable cont batching when parallel is set (#1622)   2024-01-21 14:59:48 +01:00
go              Revert "[Refactor]: Core/API Split" (#1550)                         2024-01-05 18:04:46 +01:00
python          feat(extra-backends): Improvements, adding mamba example (#1618)    2024-01-20 17:56:08 +01:00
backend.proto   feat: 🐍 add mamba support (#1589)                                   2024-01-19 23:42:50 +01:00