LocalAI/backend
fenfir fb0a4c5d9a
Build docker container for ROCm (#1595)
* Dockerfile changes to build for ROCm

* Adjust linker flags for ROCm

* Update conda env for diffusers and transformers to use ROCm pytorch

* Update transformers conda env for ROCm

* ci: build hipblas images

* fixup rebase

* use self-hosted

Signed-off-by: mudler <mudler@localai.io>

* specify LD_LIBRARY_PATH only when BUILD_TYPE=hipblas

---------

Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: mudler <mudler@localai.io>
2024-02-16 15:08:50 +01:00
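The last change in the commit above scopes the ROCm library path to hipblas builds only. A minimal sketch of that pattern, assuming the conventional ROCm install prefix `/opt/rocm` (the path is an assumption, not taken from this repository's build files):

```shell
# Sketch: extend LD_LIBRARY_PATH for ROCm runtime libraries only when
# BUILD_TYPE=hipblas, so non-ROCm builds are unaffected.
BUILD_TYPE="${BUILD_TYPE:-hipblas}"

if [ "$BUILD_TYPE" = "hipblas" ]; then
    # /opt/rocm/lib is the default ROCm library location (assumed here);
    # ${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH} appends the old value only if set.
    export LD_LIBRARY_PATH="/opt/rocm/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
fi
```

Guarding the export this way keeps CUDA or CPU-only images from picking up ROCm library paths that do not exist in those containers.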
cpp fix(llama.cpp): disable infinite context shifting (#1704) 2024-02-13 21:17:21 +01:00
go fix: drop unused code (#1697) 2024-02-11 11:28:59 +01:00
python Build docker container for ROCm (#1595) 2024-02-16 15:08:50 +01:00
backend_grpc.pb.go transformers: correctly load automodels (#1643) 2024-01-26 00:13:21 +01:00
backend.proto transformers: correctly load automodels (#1643) 2024-01-26 00:13:21 +01:00