LocalAI/backend
Latest commit cb7512734d by Ettore Di Giacinto (2024-01-26 00:13:21 +01:00)
transformers: correctly load automodels (#1643)

* backends(transformers): use AutoModel with LLM types
* examples: animagine-xl
* Add codellama examples
Name                 Last commit                                                         Date
cpp                  fix(llama.cpp): enable cont batching when parallel is set (#1622)   2024-01-21 14:59:48 +01:00
go                   Revert "[Refactor]: Core/API Split" (#1550)                         2024-01-05 18:04:46 +01:00
python               transformers: correctly load automodels (#1643)                     2024-01-26 00:13:21 +01:00
backend_grpc.pb.go   transformers: correctly load automodels (#1643)                     2024-01-26 00:13:21 +01:00
backend.proto        transformers: correctly load automodels (#1643)                     2024-01-26 00:13:21 +01:00