LocalAI/extra/grpc/exllama
Latest commit: 0eae727366 by Ettore Di Giacinto, 2023-11-11 13:14:59 +01:00: 🔥 add LLaVA support and GPT vision API, multiple requests for llama.cpp, return JSON types (#1254)
backend_pb2_grpc.py
backend_pb2.py (🔥 add LLaVA support and GPT vision API, multiple requests for llama.cpp, return JSON types, #1254)
exllama.py (feat(python-grpc): allow to set max workers with PYTHON_GRPC_MAX_WORKERS, #1081)
exllama.yml (feat(conda): conda environments, #1144)
Makefile (feat(conda): conda environments, #1144)
README.md (feat(conda): conda environments, #1144)
run.sh (feat(conda): conda environments, #1144)
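Per the commit for exllama.py (#1081), the backend's gRPC worker count can be set with the PYTHON_GRPC_MAX_WORKERS environment variable. A minimal sketch of that pattern follows; the helper name and the default of 1 are illustrative assumptions, not LocalAI's actual code:

```python
import os

def grpc_max_workers(default: int = 1) -> int:
    """Return the gRPC thread-pool size from PYTHON_GRPC_MAX_WORKERS.

    Falls back to `default` when the variable is unset or not a valid
    integer. (Helper name and default value are assumptions.)
    """
    try:
        return int(os.environ.get("PYTHON_GRPC_MAX_WORKERS", default))
    except ValueError:
        return default
```

The resulting value would typically be handed to `grpc.server(futures.ThreadPoolExecutor(max_workers=...))` when the backend server starts.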

To create a separate conda environment for the exllama project, run:

make exllama
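A rough sketch of what the `make exllama` target and `run.sh` amount to; the exact conda invocation and the environment name "exllama" are assumptions inferred from the exllama.yml file name, not verified against the Makefile:

```shell
# Create the conda environment described by exllama.yml
# (environment name "exllama" is an assumption)
conda env create --name exllama --file exllama.yml

# Activate the environment and start the gRPC backend
conda activate exllama
./run.sh
```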