LocalAI/backend
Latest commit 0fdff26924 by Ettore Di Giacinto
feat(parler-tts): Add new backend (#2027)
* feat(parler-tts): Add new backend

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(parler-tts): try downgrade protobuf

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(parler-tts): add parler conda env

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Revert "feat(parler-tts): try downgrade protobuf"

This reverts commit bd5941d5cfc00676b45a99f71debf3c34249cf3c.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* deps: add grpc

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: try to gen proto with same environment

* workaround

* Revert "fix: try to gen proto with same environment"

This reverts commit 998c745e2f.

* Workaround fixup

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Dave <dave@gray101.com>
2024-04-13 18:59:21 +02:00
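
The commit above adds the parler-tts backend under backend/python, served over LocalAI's gRPC backend interface (hence the added grpc dependency and conda environment). A minimal sketch of the parler_tts library call such a backend wraps; the model id, prompt, and voice description are illustrative only, and the gRPC servicer wiring from the commit is not shown:

    import torch
    import soundfile as sf
    from parler_tts import ParlerTTSForConditionalGeneration
    from transformers import AutoTokenizer

    device = "cuda:0" if torch.cuda.is_available() else "cpu"

    # Illustrative model id; a backend would load whatever model the TTS request names.
    repo = "parler-tts/parler_tts_mini_v0.1"
    model = ParlerTTSForConditionalGeneration.from_pretrained(repo).to(device)
    tokenizer = AutoTokenizer.from_pretrained(repo)

    text = "Hello from LocalAI."                       # text to synthesize
    description = "A calm female voice, clear audio."  # natural-language voice description

    # Parler-TTS conditions generation on both the voice description and the prompt text.
    input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
    prompt_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)

    audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_ids)
    sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
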
Name          | Last commit | Last updated
cpp           | cpp fix: respect concurrency from parent build parameters when building GRPC (#2023) | 2024-04-13 09:14:32 +02:00
go            | refactor: backend/service split, channel-based llm flow (#1963) | 2024-04-13 09:45:34 +02:00
python        | feat(parler-tts): Add new backend (#2027) | 2024-04-13 18:59:21 +02:00
backend.proto | feat: use tokenizer.apply_chat_template() in vLLM (#1990) | 2024-04-11 19:20:22 +02:00
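
The backend.proto entry above references #1990, which moved the vLLM backend to the tokenizer's built-in chat template rather than hand-assembled prompt strings. A minimal sketch of that transformers call; the model id and messages are illustrative and not taken from the PR:

    from transformers import AutoTokenizer

    # Illustrative model id; any model whose tokenizer ships a chat template works the same way.
    tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

    messages = [
        {"role": "user", "content": "What is LocalAI?"},
        {"role": "assistant", "content": "A local, OpenAI-compatible API server."},
        {"role": "user", "content": "Which backends does it ship?"},
    ]

    # Render the conversation into the model's native prompt format and append the
    # generation prompt, instead of hard-coding a template in the backend.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)
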