LocalAI/backend

Latest commit eaf85a30f9 by Sebastian — fix(llama.cpp): Enable parallel requests (#1616)
integrate changes from llama.cpp
Signed-off-by: Sebastian <tauven@gmail.com>
2024-01-21 09:56:14 +01:00
Name           Last commit                                                        Date
cpp            fix(llama.cpp): Enable parallel requests (#1616)                   2024-01-21 09:56:14 +01:00
go             Revert "[Refactor]: Core/API Split" (#1550)                        2024-01-05 18:04:46 +01:00
python         feat(extra-backends): Improvements, adding mamba example (#1618)   2024-01-20 17:56:08 +01:00
backend.proto  feat: 🐍 add mamba support (#1589)                                 2024-01-19 23:42:50 +01:00