LocalAI/pkg/model
Ettore Di Giacinto 3c9544b023
refactor: rename llama-stable to llama-ggml (#1287)
* refactor: rename llama-stable to llama-ggml

* Makefile: get sources in sources/

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixup path

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixup sources

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixup sd

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* update SD

* fixup

* fixup: create piper libdir also when not built

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix make target on linux test

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-18 08:18:43 +01:00
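The rename commit above touches initializers.go, where backend names are wired up. As a rough illustration only, the sketch below shows one way such a rename can be handled in Go: the new identifier becomes the canonical name and the old one is resolved through an alias map. All identifiers here (LLamaGGML, backendAliases, resolveBackend) are hypothetical; they are not LocalAI's actual constants, and the PR does not show whether the old name is kept as an alias.

```go
// Hypothetical sketch, not LocalAI's actual code: canonical backend name
// after the rename, plus an alias map so the legacy name still resolves.
package model

const LLamaGGML = "llama-ggml" // new backend name introduced by the rename

// backendAliases maps legacy backend names to their current equivalents.
var backendAliases = map[string]string{
	"llama-stable": LLamaGGML,
}

// resolveBackend normalizes a user-supplied backend name.
func resolveBackend(name string) string {
	if canonical, ok := backendAliases[name]; ok {
		return canonical
	}
	return name
}
```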

initializers.go refactor: rename llama-stable to llama-ggml (#1287) 2023-11-18 08:18:43 +01:00
loader.go feat: queue up requests if not running parallel requests (#1296) 2023-11-16 22:20:16 +01:00
options.go feat: allow to run parallel requests (#1290) 2023-11-16 08:20:05 +01:00
process.go feat: queue up requests if not running parallel requests (#1296) 2023-11-16 22:20:16 +01:00
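The entries for options.go (#1290) and loader.go/process.go (#1296) above describe a common pattern: parallel requests can be enabled via an option, and when they are not, incoming requests are queued and served one at a time. The following is a minimal sketch of that pattern under those assumptions; the option, struct, and method names (WithParallelRequests, ModelLoader, Predict) are illustrative and not LocalAI's actual API.

```go
// Hypothetical sketch, not LocalAI's actual API: serialize backend calls
// behind a mutex unless parallel requests have been enabled via an option.
package model

import "sync"

type Options struct {
	ParallelRequests bool
}

type Option func(*Options)

// WithParallelRequests toggles whether requests may run concurrently.
func WithParallelRequests(enabled bool) Option {
	return func(o *Options) { o.ParallelRequests = enabled }
}

type ModelLoader struct {
	opts Options
	mu   sync.Mutex
}

func NewModelLoader(opts ...Option) *ModelLoader {
	ml := &ModelLoader{}
	for _, o := range opts {
		o(&ml.opts)
	}
	return ml
}

// Predict queues requests behind a mutex unless parallel requests are enabled.
func (ml *ModelLoader) Predict(fn func() error) error {
	if !ml.opts.ParallelRequests {
		ml.mu.Lock()
		defer ml.mu.Unlock()
	}
	return fn()
}
```

In this sketch a caller gets the queueing behaviour by constructing the loader with NewModelLoader(WithParallelRequests(false)); passing true lets requests run concurrently.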