* fix: use vllm AsyncLLMEngine to bring true streaming
The current vLLM implementation uses the LLMEngine, which was designed for offline batch inference; as a result, streaming mode outputs all blobs at once at the end of the inference.
This PR reworks the gRPC server to use asyncio and gRPC.aio, in combination with vLLM's AsyncLLMEngine, to bring true streaming.
This PR also passes more parameters to vLLM during inference (presence_penalty, frequency_penalty, stop, ignore_eos, seed, ...).
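A minimal sketch of the streaming path, assuming a grpc.aio streaming handler consumes this generator (option names mirror vLLM's SamplingParams; the actual server and proto code are omitted):
```python
# Minimal sketch, not the actual backend code: stream deltas from vLLM's
# AsyncLLMEngine; a grpc.aio streaming handler would iterate this generator.
import uuid

from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams


class VLLMStreamer:
    def __init__(self, model: str):
        self.engine = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(model=model))

    async def stream(self, prompt: str, **opts):
        sampling = SamplingParams(
            temperature=opts.get("temperature", 0.9),
            presence_penalty=opts.get("presence_penalty", 0.0),
            frequency_penalty=opts.get("frequency_penalty", 0.0),
            stop=opts.get("stop"),
            ignore_eos=opts.get("ignore_eos", False),
            seed=opts.get("seed"),
        )
        request_id = str(uuid.uuid4())
        sent = ""
        # generate() is an async generator; each item holds the cumulative
        # output, so only the newly produced suffix is yielded downstream.
        async for output in self.engine.generate(prompt, sampling, request_id):
            text = output.outputs[0].text
            yield text[len(sent):]
            sent = text
```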
* Remove unused import
* Dockerfile changes to build for ROCm
* Adjust linker flags for ROCm
* Update conda env for diffusers and transformers to use ROCm pytorch
* Update transformers conda env for ROCm
* ci: build hipblas images
* fixup rebase
* use self-hosted
Signed-off-by: mudler <mudler@localai.io>
* specify LD_LIBRARY_PATH only when BUILD_TYPE=hipblas
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: mudler <mudler@localai.io>
* feat(refactor): refactor config and input reading
* feat(tts): read config file for TTS
* examples(kubernetes): Add simple deployment example
* examples(kubernetes): Add simple deployment for intel arc
* docs(sycl): add sycl example
* feat(tts): do not always pick the first model
* fixups to run vall-e-x in the container
* Correctly resolve backend
* feat(transformers): support also text generation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
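As a rough sketch of what text generation through the transformers backend amounts to (model id and generation options here are just an example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load an example causal LM and generate a short continuation.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("LocalAI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```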
* embedded: set seed -1
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Certain backends such as vall-e-x are not meant to be used as a library, so
we want to start the process in the same folder where the backend and
all of its assets are.
Fixes #1394
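Hypothetical illustration of the idea (script path and flag are placeholders): the backend process is spawned with its working directory set to the folder that holds the script and its assets.
```python
import os
import subprocess

backend_script = "/build/backend/python/vall-e-x/server.py"  # placeholder path
subprocess.Popen(
    ["python", backend_script, "--addr", "localhost:50051"],
    cwd=os.path.dirname(backend_script),  # run where the backend assets live
)
```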
* feat(conda): share env between diffusers and bark
* Detect if env already exists
* share diffusers and petals
* tests: add petals
* Use smaller model for tests with petals
* test only model load on petals
* tests(petals): run only load model tests
* Revert "test only model load on petals"
This reverts commit 111cfa97f1.
* move transformers and sentencetransformers to common env
* Share also transformers-musicgen
* feat(img2vid): Initial support for img2vid
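A rough sketch of what img2vid looks like with diffusers' Stable Video Diffusion pipeline (model id and parameters are only an example):
```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Example image-to-video pipeline; SVD expects roughly 1024x576 input frames.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
).to("cuda")

image = load_image("input.png").resize((1024, 576))
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```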
* doc(SD): fix SDXL Example
* Minor fixups for img2vid
* docs(img2img): fix example curl call
* feat(txt2vid): initial support
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
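Likewise, a sketch of txt2vid with a diffusers text-to-video pipeline (model id is only an example; depending on the diffusers version the frames may not need the extra [0] index):
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Example text-to-video pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

frames = pipe("a spaceship lifting off at dawn", num_inference_steps=25).frames[0]
export_to_video(frames, "generated.mp4")
```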
* diffusers: be backward-compatible with CUDA settings
* docs(img2vid, txt2vid): examples
* Add notice on docs
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* Use CUDA in transformers if available
TensorFlow probably needs a different check.
Signed-off-by: Erich Schubert <kno10@users.noreply.github.com>
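The check is along these lines (a TensorFlow-based backend would need something like tf.config.list_physical_devices("GPU") instead):
```python
# Sketch of the device selection; the exact model loading differs per backend.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
# ...load the transformers model, then move it with model.to(device)
```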
* feat: expose CUDA at top level
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* tests: add to tests and create workflow for py extra backends
* doc: update note on how to use core images
---------
Signed-off-by: Erich Schubert <kno10@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Erich Schubert <kno10@users.noreply.github.com>
* Update docs for new requirements.txt path
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Fix typo (.PONY -> .PHONY) in python backend makefiles
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
---------
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Fix python header comments for some extra gRPC backends
When a Python script is to be executed directly via exec(3), either the platform
must know how to execute the file itself (which requires special configuration),
or the first line must contain a shebang (#!) specifying the interpreter to run it
(similar to shell scripts).
The shebang MUST be on the first line for the script to work on all platforms,
so any header comments need to be on the lines following it. Otherwise,
executing these scripts as extra backends yields an "exec format error" message.
Changes:
* Move introductory comments below the shebang line
* Change header comment in transformers.py to refer to the correct Python module
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Make header comment in ttsbark.py more specific
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
---------
Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
* Update huggingface.py
Switch from SentenceTransformer to AutoModel in order to set trust_remote_code, which is needed to use the encode method with embedding models such as jina-v2
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
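Sketch of the switch (the model id is only an example of an embeddings model that needs remote code):
```python
from transformers import AutoModel

# trust_remote_code lets the model ship its own encode() implementation.
model = AutoModel.from_pretrained(
    "jinaai/jina-embeddings-v2-base-en", trust_remote_code=True
)
embeddings = model.encode(["What is LocalAI?"])
```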
* feat(transformers): split in separate backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
* refactor: move backends into the backends directory
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: move main close to implementation for every backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>