LocalAI/extra/grpc/vllm
Latest commit 0eae727366 by Ettore Di Giacinto (2023-11-11 13:14:59 +01:00):

🔥 add LLaVA support and GPT vision API, multiple requests for llama.cpp, return JSON types (#1254)

* Make it functional
* Do not inject space on role encoding; encode the image at the beginning of messages
* Add examples/config defaults
* Add the include dir of the current source dir
* Cleanup and fixes

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File                   Last commit                                                         Date
backend_pb2_grpc.py    feat(vllm): Initial vllm backend implementation                     2023-09-09
backend_pb2.py         🔥 add LLaVA support and GPT vision API, multiple requests for
                       llama.cpp, return JSON types (#1254)                                2023-11-11
backend_vllm.py        feat(conda): conda environments (#1144)                             2023-11-04
Makefile               feat(conda): conda environments (#1144)                             2023-11-04
README.md              feat(conda): conda environments (#1144)                             2023-11-04
run.sh                 feat(conda): conda environments (#1144)                             2023-11-04
test_backend_vllm.py   feat(conda): conda environments (#1144)                             2023-11-04
vllm.yml               feat(conda): conda environments (#1144)                             2023-11-04

To create a separate conda environment for the vllm backend, run:

make vllm
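
The vllm.yml listed above is a conda environment spec, and the Makefile and run.sh were added by the same conda-environments commit (#1144), so the `vllm` target presumably builds the environment from that spec. A minimal sketch of such a target follows; the target name matches the command above, but the recipe is an assumption based on the file listing, not the repository's verbatim Makefile:

    # Hypothetical sketch: create a dedicated conda environment for the
    # vllm backend from the vllm.yml spec shipped in this directory.
    # Check the actual Makefile for the real recipe.
    .PHONY: vllm
    vllm:
    	conda env create --name vllm --file vllm.yml

Once the environment exists, run.sh presumably activates it and launches the gRPC server implemented in backend_vllm.py, which uses the protoc-generated stubs in backend_pb2.py and backend_pb2_grpc.py.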