LocalAI/backend/python/exllama/Makefile
export CONDA_ENV_PATH = "exllama.yml"

# Create the conda environment and install the exllama backend dependencies.
.PHONY: exllama
exllama:
	bash install.sh ${CONDA_ENV_PATH}

# Start the exllama backend inside the prepared environment.
.PHONY: run
run:
	@echo "Running exllama..."
	bash run.sh
	@echo "exllama run."
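# Usage sketch (an assumption, not part of the original file): run these
# targets from this backend directory, with conda available and the
# install.sh / run.sh scripts referenced above sitting alongside the Makefile.
#   make exllama   # builds the conda env described by exllama.yml via install.sh
#   make run       # launches the backend via run.sh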