LocalAI/.github
Latest commit 1120847f72 by Ettore Di Giacinto:
feat: bump llama.cpp, add gguf support (#943)
**Description**

This PR syncs up the `llama` backend to use `gguf`
(https://github.com/go-skynet/go-llama.cpp/pull/180). It also adds
`llama-stable` to the targets so we can still load ggml. It adapts the
current tests to use the `llama-stable` backend for ggml and uses a
`gguf` model to run tests on the new backend.
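
For context on how the two targets are picked at runtime: LocalAI selects a backend per model through its YAML config. A minimal sketch, assuming hypothetical model names and file paths (only the `backend` values come from this PR):

```yaml
# models/gguf-model.yaml — new-format model on the updated llama backend
name: my-gguf-model                    # hypothetical
backend: llama                         # loads gguf after this PR
parameters:
  model: open-llama-7b.Q4_0.gguf       # hypothetical gguf file
---
# models/ggml-model.yaml — legacy model pinned to the new target
name: my-ggml-model                    # hypothetical
backend: llama-stable                  # keeps ggml models loading
parameters:
  model: open-llama-7b.ggmlv3.q4_0.bin # hypothetical ggml file
```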

In order to consume the new version of go-llama.cpp, it also bumps Go to
1.21 (images, pipelines, etc.).
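
The Go requirement shows up in the module file as well as in the images and pipelines; an illustrative sketch of the `go.mod` change (the module path is the repository's, the rest of the file is elided):

```
module github.com/go-skynet/LocalAI

go 1.21
```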

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Committed 2023-08-24 01:18:58 +02:00
| Name | Last commit | Date |
| --- | --- | --- |
| ISSUE_TEMPLATE | github: add ISSUE_TEMPLATE (#307) | 2023-05-19 11:46:53 +02:00 |
| workflows | feat: bump llama.cpp, add gguf support (#943) | 2023-08-24 01:18:58 +02:00 |
| bump_deps.sh | ci: manually update deps | 2023-05-04 15:01:29 +02:00 |
| FUNDING.yml | Create FUNDING.yml (#725) | 2023-07-09 13:39:00 +02:00 |
| PULL_REQUEST_TEMPLATE.md | feat: add PR template and stale configuration (#316) | 2023-05-20 09:10:20 +02:00 |
| release.yml | ci: add binary releases pipelines (#358) | 2023-05-23 17:12:48 +02:00 |
| stale.yml | feat: add PR template and stale configuration (#316) | 2023-05-20 09:10:20 +02:00 |