LocalAI/backend/python/vllm
Latest commit: feat(conda): share envs with transformer-based backends (#1465)
Author: Ettore Di Giacinto (b4b21a446b), 2023-12-21 08:35:15 +01:00

* feat(conda): share env between diffusers and bark
* Detect if env already exists
* share diffusers and petals
* tests: add petals
* Use smaller model for tests with petals
* test only model load on petals
* tests(petals): run only load model tests
* Revert "test only model load on petals" (reverts commit 111cfa97f1)
* move transformers and sentencetransformers to common env
* Share also transformers-musicgen
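
The change above amounts to checking whether the shared conda environment already exists before creating it, so the transformer-based backends can reuse a single environment. A minimal sketch of that detection step, assuming an environment name of `transformers` and the `vllm.yml` file in this directory (both are assumptions for illustration, not taken from the actual scripts):

```bash
#!/bin/bash
# Hypothetical sketch: reuse a shared conda env across transformer-based backends,
# creating it from vllm.yml only when it does not already exist.
ENV_NAME="transformers"   # assumed name of the shared environment

if conda env list | awk '{print $1}' | grep -qx "$ENV_NAME"; then
    echo "conda env '$ENV_NAME' already exists, skipping creation"
else
    conda env create --name "$ENV_NAME" --file vllm.yml
fi
```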
Files in this directory (last change, date):

* backend_pb2_grpc.py: refactor: move backends into the backends directory (#1279), 2023-11-13 22:40:16 +01:00
* backend_pb2.py: feat(diffusers): update, add autopipeline, controlnet (#1432), 2023-12-13 19:20:22 +01:00
* backend_vllm.py: refactor: move backends into the backends directory (#1279), 2023-11-13 22:40:16 +01:00
* Makefile: extras: add vllm,bark,vall-e-x tests, bump diffusers (#1422), 2023-12-12 00:39:26 +01:00
* README.md: refactor: move backends into the backends directory (#1279), 2023-11-13 22:40:16 +01:00
* run.sh: refactor: move backends into the backends directory (#1279), 2023-11-13 22:40:16 +01:00
* test_backend_vllm.py: feat(conda): share envs with transformer-based backends (#1465), 2023-12-21 08:35:15 +01:00
* test.sh: extras: add vllm,bark,vall-e-x tests, bump diffusers (#1422), 2023-12-12 00:39:26 +01:00
* vllm.yml: refactor: move backends into the backends directory (#1279), 2023-11-13 22:40:16 +01:00

Creating a separate environment for the vllm project:

make vllm
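
Once the environment exists, the backend is launched through the gRPC server script in this directory. A hypothetical invocation, assuming run.sh activates the environment and forwards its arguments to backend_vllm.py, and that the server accepts an address flag (both assumptions based on the file names, not confirmed by this listing):

```bash
# Hypothetical usage sketch: start the vLLM gRPC backend on a local port.
# The --addr flag is assumed, not documented here.
bash run.sh --addr=localhost:50051
```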