LocalAI/backend/python
Ludovic Leroux 0135e1e3b9
fix: vllm - use AsyncLLMEngine to allow true streaming mode (#1749)
* fix: use vllm AsyncLLMEngine to bring true stream

The current vLLM implementation uses LLMEngine, which was designed for offline batch inference; as a result, streaming mode outputs all blobs at once at the end of inference.

This PR reworks the gRPC server to use asyncio and gRPC.aio, in combination with vLLM's AsyncLLMEngine, to bring a true streaming mode (a minimal sketch of the pattern is shown below).

This PR also passes more parameters to vLLM during inference (presence_penalty, frequency_penalty, stop, ignore_eos, seed, ...).

* Remove unused import
2024-02-24 11:48:45 +01:00
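For reference, here is a minimal sketch of the pattern the commit describes: AsyncLLMEngine drives generation in the background and yields partial outputs, and an async gRPC handler forwards each new chunk as soon as it is produced. The proto stubs (backend_pb2 / backend_pb2_grpc), their field names, and the model name below are placeholders for illustration rather than the actual LocalAI backend code, and the exact vLLM and grpc.aio signatures may vary between versions.

```python
# Illustrative sketch only: proto stubs and field names are assumed,
# and vLLM / grpc.aio APIs may differ between versions.
import asyncio
import uuid

import grpc
from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams

import backend_pb2        # hypothetical generated stubs
import backend_pb2_grpc   # hypothetical generated stubs


class BackendServicer(backend_pb2_grpc.BackendServicer):
    def __init__(self, model: str):
        # AsyncLLMEngine wraps the engine in a background loop suited to
        # online serving, unlike the offline-batch LLMEngine.
        self.engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model=model))

    async def PredictStream(self, request, context):
        # Forward the extra sampling parameters mentioned in the PR
        # (availability of some, e.g. seed, depends on the vLLM version).
        sampling = SamplingParams(
            temperature=request.Temperature,
            presence_penalty=request.PresencePenalty,
            frequency_penalty=request.FrequencyPenalty,
            stop=list(request.StopPrompts),
            ignore_eos=request.IgnoreEOS,
            seed=request.Seed or None,
        )
        sent = ""
        # generate() is an async generator of partial RequestOutputs, so
        # each delta can be streamed to the client as it is produced.
        async for output in self.engine.generate(
            request.Prompt, sampling, request_id=str(uuid.uuid4())
        ):
            text = output.outputs[0].text
            delta, sent = text[len(sent):], text
            yield backend_pb2.Reply(message=delta.encode("utf-8"))


async def serve(address: str = "localhost:50051") -> None:
    server = grpc.aio.server()
    backend_pb2_grpc.add_BackendServicer_to_server(
        BackendServicer(model="facebook/opt-125m"), server
    )
    server.add_insecure_port(address)
    await server.start()
    await server.wait_for_termination()


if __name__ == "__main__":
    asyncio.run(serve())
```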
| Name | Last commit | Last updated |
|------|-------------|--------------|
| autogptq | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| bark | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| common-env/transformers | Add TTS dependency for cuda based builds fixes #1727 (#1730) | 2024-02-20 21:59:43 +01:00 |
| coqui | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| diffusers | Build docker container for ROCm (#1595) | 2024-02-16 15:08:50 +01:00 |
| exllama | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| exllama2 | fix(python): pin exllama2 (#1711) | 2024-02-14 21:44:12 +01:00 |
| mamba | fix(python): pin exllama2 (#1711) | 2024-02-14 21:44:12 +01:00 |
| petals | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| sentencetransformers | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| transformers | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| transformers-musicgen | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| vall-e-x | fix(python): pin exllama2 (#1711) | 2024-02-14 21:44:12 +01:00 |
| vllm | fix: vllm - use AsyncLLMEngine to allow true streaming mode (#1749) | 2024-02-24 11:48:45 +01:00 |
| README.md | refactor: move backends into the backends directory (#1279) | 2023-11-13 22:40:16 +01:00 |

Common commands for working with conda environments

Create a new empty conda environment:

conda create --name <env-name> python=<version> -y

For example:

conda create --name autogptq python=3.11 -y

To activate the environment

As of conda 4.4

conda activate autogptq

For conda versions older than 4.4

source activate autogptq

Install packages into your environment

Sometimes you need to install packages from the conda-forge channel.

Using conda:

conda install <your-package-name>

conda install -c conda-forge <your-package-name>

Or using pip:

pip install <your-package-name>