LocalAI/.github/workflows
Latest commit 1c57f8d077 by Ettore Di Giacinto
feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)
* feat(sycl): Add sycl support (#1647)

* onekit: install without prompts

* set cmake args only in grpc-server

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* cleanup

* fixup sycl source env

* Cleanup docs

* ci: runs on self-hosted

* fix typo

* bump llama.cpp

* llama.cpp: update server

* adapt to upstream changes

* adapt to upstream changes

* docs: add sycl

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Committed 2024-02-01 19:21:52 +01:00
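The commit log above notes that the sycl CI jobs run on a self-hosted runner and that the image workflows gained Intel GPU build variants. Below is a minimal sketch of what such a job could look like; the runner label, the `BUILD_TYPE=sycl_f16` build argument, and the image tag are illustrative assumptions, not the actual contents of `image.yml` or `image-pr.yml`.

```yaml
# Minimal sketch of a sycl image-build job; the build argument and tag
# below are assumptions, not the real contents of image.yml / image-pr.yml.
name: sycl-image-sketch

on:
  pull_request:

jobs:
  sycl-image:
    # The commit log moves the sycl jobs to a self-hosted runner, which keeps
    # the Intel oneAPI toolchain and Docker layer cache available between runs.
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Build the sycl-enabled container image
        # BUILD_TYPE=sycl_f16 is an assumed build-argument name and value;
        # check the project's Dockerfile and Makefile for the exact ones.
        run: docker build --build-arg BUILD_TYPE=sycl_f16 -t localai:sycl-f16 .
```

Running on a self-hosted machine also sidesteps the disk and time limits of hosted runners, which matters for large GPU-enabled images.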
| File | Last commit | Date |
|---|---|---|
| disabled | feat(conda): conda environments (#1144) | 2023-11-04 15:30:32 +01:00 |
| bump_deps.yaml | chore(deps): update actions/checkout action to v4 (#1006) | 2023-10-21 08:55:44 +02:00 |
| bump_docs.yaml | docs: automatically track latest versions (#1451) | 2023-12-17 19:02:13 +01:00 |
| image_build.yml | ci(dockerhub): push images also to dockerhub (#1542) | 2024-01-04 08:32:29 +01:00 |
| image-pr.yml | feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660) | 2024-02-01 19:21:52 +01:00 |
| image.yml | feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660) | 2024-02-01 19:21:52 +01:00 |
| release.yaml | feat: embedded model configurations, add popular model examples, refactoring (#1532) | 2024-01-05 23:16:33 +01:00 |
| test-extra.yml | feat: add 🐸 coqui (#1489) | 2023-12-24 19:38:54 +01:00 |
| test.yml | feat: embedded model configurations, add popular model examples, refactoring (#1532) | 2024-01-05 23:16:33 +01:00 |
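The `bump_deps.yaml` and `bump_docs.yaml` entries above are scheduled automation workflows. A minimal sketch of that pattern follows; the cron expression, helper script, and branch name are hypothetical, and `peter-evans/create-pull-request` is simply a commonly used action for opening automated PRs, not necessarily the one these workflows use.

```yaml
# Minimal sketch of a scheduled version-bump workflow in the style of
# bump_deps.yaml; the schedule, script, and PR details are hypothetical.
name: bump-deps-sketch

on:
  schedule:
    - cron: "0 20 * * *"   # once a day

jobs:
  bump:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Update pinned dependency versions
        run: ./scripts/bump_deps.sh   # hypothetical helper script
      - name: Open a pull request with the resulting diff
        uses: peter-evans/create-pull-request@v5
        with:
          commit-message: "chore(deps): bump pinned versions"
          title: "chore(deps): bump pinned versions"
          branch: ci/bump-deps
```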