# LocalAI/backend/python/mamba
| File | Last commit | Date |
| --- | --- | --- |
| backend_mamba.py | feat(extra-backends): Improvements, adding mamba example (#1618) | 2024-01-20 |
| backend_pb2_grpc.py | feat: 🐍 add mamba support (#1589) | 2024-01-19 |
| backend_pb2.py | Bump vLLM version + more options when loading models in vLLM (#1782) | 2024-03-01 |
| install.sh | fix(python): pin exllama2 (#1711) | 2024-02-14 |
| Makefile | feat: 🐍 add mamba support (#1589) | 2024-01-19 |
| README.md | feat: 🐍 add mamba support (#1589) | 2024-01-19 |
| run.sh | feat: 🐍 add mamba support (#1589) | 2024-01-19 |
| test_backend_mamba.py | feat: 🐍 add mamba support (#1589) | 2024-01-19 |
| test.sh | feat: 🐍 add mamba support (#1589) | 2024-01-19 |
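
The `backend_pb2.py` and `backend_pb2_grpc.py` files are protobuf/gRPC stubs generated from LocalAI's `backend.proto`, and `backend_mamba.py` implements the servicer they define. Below is a minimal sketch of that wiring, assuming the stubs expose LocalAI's usual `Backend` service; the `Health`/`LoadModel`/`Predict` field names shown are illustrative and not verified against this pinned revision.

```python
# Minimal sketch of a LocalAI Python backend in the shape of backend_mamba.py.
# Assumes stubs generated from LocalAI's backend.proto; message/field names
# are illustrative, not verified against the pinned proto version.
from concurrent import futures

import grpc

import backend_pb2
import backend_pb2_grpc


class BackendServicer(backend_pb2_grpc.BackendServicer):
    def Health(self, request, context):
        # Liveness probe LocalAI uses to detect a ready backend.
        return backend_pb2.Reply(message=b"OK")

    def LoadModel(self, request, context):
        # request.Model carries the model name/path from the LocalAI model
        # config; the real backend loads the mamba weights here.
        self.model_name = request.Model
        return backend_pb2.Result(message="Model loaded", success=True)

    def Predict(self, request, context):
        # request.Prompt holds the input text; a real implementation runs
        # mamba generation and returns the decoded output.
        return backend_pb2.Reply(message=b"...generated text...")


def serve(address: str = "localhost:50051") -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
    backend_pb2_grpc.add_BackendServicer_to_server(BackendServicer(), server)
    server.add_insecure_port(address)
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```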

## Creating a separate environment for the mamba project

```
make mamba
```
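
Judging from the files listed above, `run.sh` starts the backend's gRPC server once the environment exists, and `test.sh` / `test_backend_mamba.py` exercise it. The sketch below shows what a client-side smoke test of that kind could look like, assuming the same generated stubs; the port and the model name are placeholders, not values taken from the actual scripts.

```python
# Hedged sketch of a gRPC smoke test in the spirit of test_backend_mamba.py.
# The address and model name are placeholders; the real test script picks
# its own port and model.
import grpc

import backend_pb2
import backend_pb2_grpc

with grpc.insecure_channel("localhost:50051") as channel:
    stub = backend_pb2_grpc.BackendStub(channel)

    # Check the server is up before loading anything.
    health = stub.Health(backend_pb2.HealthMessage())
    print(health.message)

    # Load a mamba checkpoint (model name is illustrative).
    result = stub.LoadModel(backend_pb2.ModelOptions(Model="state-spaces/mamba-130m"))
    assert result.success, result.message

    # Run a single completion.
    reply = stub.Predict(backend_pb2.PredictOptions(Prompt="Hello, "))
    print(reply.message)
```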