LocalAI [bot]
7e34dfdae7
⬆️ Update ggerganov/llama.cpp ( #1866 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-20 22:13:29 +00:00
LocalAI [bot]
e4bf51d5bd
⬆️ Update ggerganov/llama.cpp ( #1864 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-20 09:05:53 +01:00
LocalAI [bot]
ead61bf9d5
⬆️ Update ggerganov/llama.cpp ( #1857 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-19 00:03:17 +00:00
LocalAI [bot]
621541a92f
⬆️ Update ggerganov/whisper.cpp ( #1508 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-19 00:44:23 +01:00
Dave
ed5734ae25
test/fix: OSX Test Repair ( #1843 )
...
* test with gguf instead of ggml. Updates testPrompt to match? Adds a debugging line to the Dockerfile that I've found helpful recently.
* fix testPrompt slightly
* Sad Experiment: Test GH runner without metal?
* break apart CGO_LDFLAGS
* switch runner
* upstream llama.cpp disables Metal on GitHub CI!
* missed a dir from clean-tests
* CGO_LDFLAGS
* tmate failure + NO_ACCELERATE
* whisper.cpp has a metal fix
* do the exact opposite of the name of this branch, but keep it around for unrelated fixes?
* add back newlines
* add tmate to linux for testing
* update fixtures
* timeout for tmate
2024-03-18 19:19:43 +01:00
Ettore Di Giacinto
b202bfaaa0
deps(whisper.cpp): update, fix cublas build ( #1846 )
...
fix(whisper.cpp): Add stubs and -lcuda
2024-03-18 15:56:53 +01:00
LocalAI [bot]
0eb0ac7dd0
⬆️ Update ggerganov/llama.cpp ( #1848 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-18 08:57:58 +01:00
cryptk
020ce29cd8
fix(make): allow to parallelize jobs ( #1845 )
...
* fix: clean up Makefile dependencies to allow for parallel builds
* refactor: remove old unused backend from Makefile
* fix: finish removing legacy backend, update piper
* fix: I broke llama... I fixed llama
* feat: give the tests and builds a few threads
* fix: ensure libraries are replaced before build, add dropreplace target
* Fix image build workflows
2024-03-17 15:39:20 +01:00
LocalAI [bot]
8967ed1601
⬆️ Update ggerganov/llama.cpp ( #1840 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-16 11:25:41 +00:00
LocalAI [bot]
5826fb8e6d
⬆️ Update mudler/go-piper ( #1844 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-15 23:51:03 +00:00
Dave
db199f61da
fix: osx build default.metallib ( #1837 )
...
* port osx fix from refactor pr to slim pr
* manually bump llama.cpp version to unstick CI?
2024-03-15 08:18:58 +00:00
LocalAI [bot]
44adbd2c75
⬆️ Update go-skynet/go-llama.cpp ( #1835 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-14 23:06:42 +00:00
Dave
45d520f913
fix: OSX Build Files for llama.cpp ( #1836 )
...
bot ate my changes, separate branch
2024-03-14 23:07:47 +01:00
LocalAI [bot]
f82065703d
⬆️ Update ggerganov/llama.cpp ( #1827 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-14 08:39:39 +01:00
LocalAI [bot]
5c5f07c1e7
⬆️ Update ggerganov/llama.cpp ( #1821 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-13 10:05:46 +01:00
LocalAI [bot]
8e57f4df31
⬆️ Update ggerganov/llama.cpp ( #1818 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-11 00:02:37 +01:00
LocalAI [bot]
a08cc5adbb
⬆️ Update ggerganov/llama.cpp ( #1816 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-10 09:32:09 +01:00
LocalAI [bot]
595a73fce4
⬆️ Update ggerganov/llama.cpp ( #1813 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-09 09:27:06 +01:00
LocalAI [bot]
dc919e08e8
⬆️ Update ggerganov/llama.cpp ( #1811 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-08 08:21:25 +01:00
Ettore Di Giacinto
5d1018495f
feat(intel): add diffusers/transformers support ( #1746 )
...
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it on the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back at calling python, fixed at dockerfile level
* Exllama2 restricted to NVIDIA GPUs only
* Tokenizer to xpu
2024-03-07 14:37:45 +01:00
LocalAI [bot]
ad6fd7a991
⬆️ Update ggerganov/llama.cpp ( #1805 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-06 23:28:31 +01:00
LocalAI [bot]
e022b5959e
⬆️ Update mudler/go-stable-diffusion ( #1802 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 23:39:57 +00:00
LocalAI [bot]
db7f4955a1
⬆️ Update ggerganov/llama.cpp ( #1801 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 21:50:27 +00:00
LocalAI [bot]
c8e29033c2
⬆️ Update ggerganov/llama.cpp ( #1794 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 08:59:09 +01:00
LocalAI [bot]
d0bd961bde
⬆️ Update ggerganov/llama.cpp ( #1791 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-04 09:44:21 +01:00
LocalAI [bot]
b60a3fc879
⬆️ Update ggerganov/llama.cpp ( #1789 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-03 08:49:23 +01:00
LocalAI [bot]
daa0b8741c
⬆️ Update ggerganov/llama.cpp ( #1785 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-01 22:38:24 +00:00
Dave
1c312685aa
refactor: move remaining api packages to core ( #1731 )
...
* core 1
* api/openai/files fix
* core 2 - core/config
* move over core api.go and tests to the start of core/http
* move over localai specific endpoints to core/http, begin the service/endpoint split there
* refactor big chunk on the plane
* refactor chunk 2 on plane, next step: port and modify changes to request.go
* easy fixes for request.go, major changes not done yet
* lintfix
* json tag lintfix?
* gitignore and .keep files
* strange fix attempt: rename the config dir?
2024-03-01 16:19:53 +01:00
LocalAI [bot]
316de82f51
⬆️ Update ggerganov/llama.cpp ( #1779 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-29 22:33:30 +00:00
LocalAI [bot]
c665898652
⬆️ Update donomii/go-rwkv.cpp ( #1771 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-28 23:50:27 +00:00
LocalAI [bot]
f651a660aa
⬆️ Update ggerganov/llama.cpp ( #1772 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-28 23:02:30 +01:00
LocalAI [bot]
c7e08813a5
⬆️ Update ggerganov/llama.cpp ( #1767 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-27 23:12:51 +01:00
LocalAI [bot]
d21a6b33ab
⬆️ Update ggerganov/llama.cpp ( #1756 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-27 18:07:51 +00:00
Ettore Di Giacinto
d6cf82aba3
fix(tests): re-enable tests after code move ( #1764 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-27 15:04:19 +01:00
Ettore Di Giacinto
bc5f5aa538
deps(llama.cpp): update ( #1759 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-26 13:18:44 +01:00
Sertaç Özercan
7f72a61104
ci: add stablediffusion to release ( #1757 )
...
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
2024-02-25 23:06:18 +00:00
LocalAI [bot]
8e45d47740
⬆️ Update ggerganov/llama.cpp ( #1753 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-25 10:03:19 +01:00
LocalAI [bot]
ff88c390bb
⬆️ Update ggerganov/llama.cpp ( #1750 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-24 00:06:46 +01:00
LocalAI [bot]
d825821a22
⬆️ Update ggerganov/llama.cpp ( #1740 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-23 00:07:15 +01:00
LocalAI [bot]
6fc122fa1a
⬆️ Update ggerganov/llama.cpp ( #1705 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-22 09:33:23 +00:00
Ettore Di Giacinto
8292781045
deps(llama.cpp): update, support Gemma models ( #1734 )
...
deps(llama.cpp): update
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-21 17:23:38 +01:00
Ettore Di Giacinto
54ec6348fa
deps(llama.cpp): update ( #1714 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-21 11:35:44 +01:00
fenfir
fb0a4c5d9a
Build docker container for ROCm ( #1595 )
...
* Dockerfile changes to build for ROCm
* Adjust linker flags for ROCm
* Update conda env for diffusers and transformers to use ROCm pytorch
* Update transformers conda env for ROCm
* ci: build hipblas images
* fixup rebase
* use self-hosted
Signed-off-by: mudler <mudler@localai.io>
* specify LD_LIBRARY_PATH only when BUILD_TYPE=hipblas
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: mudler <mudler@localai.io>
2024-02-16 15:08:50 +01:00
Ettore Di Giacinto
5e155fb081
fix(python): pin exllama2 ( #1711 )
...
fix(python): pin python deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-14 21:44:12 +01:00
Ettore Di Giacinto
39a6b562cf
fix(llama.cpp): downgrade to a known working version ( #1706 )
...
sycl support is broken otherwise.
See upstream issue: https://github.com/ggerganov/llama.cpp/issues/5469
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-02-14 10:28:06 +01:00
LocalAI [bot]
02f6e18adc
⬆️ Update ggerganov/llama.cpp ( #1700 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-12 21:43:33 +00:00
LocalAI [bot]
4436e62cf1
⬆️ Update ggerganov/llama.cpp ( #1698 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-12 09:56:04 +01:00
LocalAI [bot]
58cdf97361
⬆️ Update ggerganov/llama.cpp ( #1694 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-11 10:01:11 +01:00
LocalAI [bot]
ef1306f703
⬆️ Update mudler/go-stable-diffusion ( #1674 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-09 21:59:15 +00:00
LocalAI [bot]
3196967995
⬆️ Update ggerganov/llama.cpp ( #1691 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-09 21:50:34 +00:00