LocalAI/backend/python/autogptq
Latest commit: 12c0d9443e — feat: use tokenizer.apply_chat_template() in vLLM (#1990)
Signed-off-by: Ludovic LEROUX <ludovic@inpher.io>
Date: 2024-04-11 19:20:22 +02:00
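The commit above moves vLLM prompt construction onto `tokenizer.apply_chat_template()`, the Hugging Face transformers API that renders a list of chat messages into a model-specific prompt string. As a rough illustration of what that call produces, here is a minimal pure-Python sketch of a ChatML-style template (hypothetical: real templates are Jinja2 strings bundled with each tokenizer, and formats vary by model):

```python
# Sketch of the kind of rendering tokenizer.apply_chat_template() performs,
# assuming a ChatML-style template. Not the actual transformers implementation.
def apply_chat_template(messages, add_generation_prompt=True):
    parts = []
    for m in messages:
        # Each message becomes a role-tagged block.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(apply_chat_template(messages))
```

With the real API, the equivalent call would be `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` on a tokenizer loaded via `AutoTokenizer.from_pretrained(...)`.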
| File | Last change | Date |
| --- | --- | --- |
| autogptq.py | fix(autogptq): do not use_triton with qwen-vl (#1985) | 2024-04-10 10:36:10 +00:00 |
| autogptq.yml | Enhance autogptq backend to support VL models (#1860) | 2024-03-26 18:48:14 +01:00 |
| backend_pb2_grpc.py | feat: use tokenizer.apply_chat_template() in vLLM (#1990) | 2024-04-11 19:20:22 +02:00 |
| backend_pb2.py | feat: use tokenizer.apply_chat_template() in vLLM (#1990) | 2024-04-11 19:20:22 +02:00 |
| Makefile | deps(conda): use transformers environment with autogptq (#1555) | 2024-01-06 15:30:53 +01:00 |
| README.md | refactor: move backends into the backends directory (#1279) | 2023-11-13 22:40:16 +01:00 |
| run.sh | deps(conda): use transformers environment with autogptq (#1555) | 2024-01-06 15:30:53 +01:00 |

Creating a separate environment for the autogptq project:

    make autogptq