LocalAI/api
Ettore Di Giacinto 3c9544b023
refactor: rename llama-stable to llama-ggml (#1287)
* refactor: rename llama-stable to llama-ggml
* Makefile: get sources in sources/
* fixup path
* fixup sources
* fixups sd
* update SD
* fixup
* fixup: create piper libdir also when not built
* fix make target on linux test

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-18 08:18:43 +01:00
Name                Latest commit                                                         Date
backend             feat: allow to run parallel requests (#1290)                          2023-11-16 08:20:05 +01:00
config              fix(api/config): allow YAML config with .yml (#1299)                  2023-11-17 22:47:30 +01:00
localai             feat: queue up requests if not running parallel requests (#1296)      2023-11-16 22:20:16 +01:00
openai              fix: respect OpenAI spec for response format (#1289)                  2023-11-15 19:36:23 +01:00
options             feat: allow to run parallel requests (#1290)                          2023-11-16 08:20:05 +01:00
schema              fix: respect OpenAI spec for response format (#1289)                  2023-11-15 19:36:23 +01:00
api_test.go         refactor: rename llama-stable to llama-ggml (#1287)                   2023-11-18 08:18:43 +01:00
api.go              feat(metrics): Adding initial support for prometheus metrics (#1176)  2023-10-17 18:22:53 +02:00
apt_suite_test.go   feat: add CI/tests (#58)                                              2023-04-22 00:44:52 +02:00