* feat(gallery): op now supports deletion of models
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Wire things up with the WebUI (WIP)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* minor improvements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
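A minimal client-side sketch of triggering a model deletion through the gallery op, in Python with `requests`; the endpoint path and payload shape here are assumptions for illustration, not the confirmed API surface.

```python
import requests

BASE_URL = "http://localhost:8080"  # default LocalAI address

# Hypothetical deletion call: the exact endpoint and payload are assumptions.
resp = requests.post(f"{BASE_URL}/models/delete", json={"name": "my-model"})
resp.raise_for_status()
print(resp.json())
```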
* refactor(template): isolate and add tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
* fix(go-llama): use llama-cpp as default
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* fix(backends): drop obsolete lines
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass through all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back to calling python, fixed at the Dockerfile level
* Restrict exllama2 to NVIDIA GPUs only
* Tokenizer to xpu
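A rough Python sketch of the backend behavior outlined above: defaulting the transformers/HF cache dir to the model path and moving a diffusers pipeline onto the Intel xpu device. It assumes `torch`, `diffusers`, and `intel_extension_for_pytorch` are installed; the model id and paths are illustrative.

```python
import os

import torch
import intel_extension_for_pytorch  # noqa: F401  # registers the xpu device
from diffusers import DiffusionPipeline

MODEL_PATH = "/models"  # illustrative model path

# Map the transformers/HF cache dir to the model path when not explicitly set.
os.environ.setdefault("HF_HOME", MODEL_PATH)

# Prefer xpu when available, otherwise fall back to CPU.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

# Illustrative model id; any diffusers-compatible model works the same way.
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
pipe.to(device)

image = pipe("a photo of a cat", num_inference_steps=20).images[0]
image.save("out.png")
```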
* feat(tools): support Tools in the API
Co-authored-by: Stephan Aßmus <stephan.assmus@sap.com>
* feat(tools): support function streaming
* Adhere to new return types when using tools instead of functions
* Keep backward compatibility with function calling
* Evaluate function names in chat templates
* Disable recovery with --debug
* Correctly stream out the entire result
* Detect when the LLM chooses to reply without performing any action when streaming via SSE
* Feedback from code review
---------
Co-authored-by: Stephan Aßmus <stephan.assmus@sap.com>
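A small usage sketch of the tools support from an OpenAI-compatible client, assuming the `openai` Python package and a LocalAI instance listening on `localhost:8080`; the model and function names are placeholders.

```python
from openai import OpenAI

# LocalAI exposes an OpenAI-compatible API; the model name is a placeholder.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Stream the response: tool calls arrive as deltas, plain replies as content.
stream = client.chat.completions.create(
    model="my-model",  # placeholder model name configured in LocalAI
    messages=[{"role": "user", "content": "What's the weather in Rome?"}],
    tools=tools,
    stream=True,
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        for call in delta.tool_calls:
            if call.function:
                print("tool call fragment:", call.function.name, call.function.arguments)
    elif delta.content:
        print(delta.content, end="")
```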
* cleanup backends
* switch image to ubuntu 22.04
* adapt commands for ubuntu
* transformers cleanup
* no contrib on ubuntu
* Change test model to gguf
* ci: disable bark tests (too cpu-intensive)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cleanup
* refinements
* use intel base image
* Makefile: Add docker targets
* Change test model
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: allow passing models via args
* also expose it as an env var/arg
* docs: enhancements to build/requirements
* do not always display status
* print download status
* not all messages are debug
* feat: Allow inline templates
* feat: Allow specifying a URL in model config files
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* feat: support 'huggingface://' format
* style: reuse-code from gallery
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
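A Python sketch (assuming PyYAML) that writes a model config using both features above: an inline template and a `huggingface://` model URL. The field names follow my reading of LocalAI's YAML model configs and should be treated as assumptions.

```python
import yaml

config = {
    "name": "my-model",
    "parameters": {
        # The model can be referenced by URL, including the huggingface:// scheme.
        "model": "huggingface://TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    },
    # Templates can be declared inline instead of pointing to a separate file.
    "template": {
        "chat": "A chat between a user and an assistant.\n{{.Input}}\nASSISTANT: ",
    },
}

with open("my-model.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```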
* refactor: rename llama-stable to llama-ggml
* Makefile: get sources in sources/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup path
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup sources
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups sd
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update SD
* fixup
* fixup: create piper libdir also when not built
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix make target on linux test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat: allow running parallel requests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
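A quick way to exercise the parallel-request support is to fire several chat completions concurrently; a Python sketch with `requests`, where the endpoint is the OpenAI-compatible one and the model name is a placeholder.

```python
import concurrent.futures

import requests

URL = "http://localhost:8080/v1/chat/completions"

def ask(prompt: str) -> str:
    payload = {
        "model": "my-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }
    resp = requests.post(URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

prompts = [f"Write a haiku about the number {i}" for i in range(4)]

# Requests are sent concurrently; with parallel requests enabled the server
# can process them at the same time instead of queueing them.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    for answer in pool.map(ask, prompts):
        print(answer)
```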
* wip
* wip
* Make it functional
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* wip
* Small fixups
* do not inject a space when encoding the role; encode images at the beginning of messages
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Add examples/config defaults
* Add include dir of current source dir
* cleanup
* fixes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
* Revert "fixups"
This reverts commit f1a4731cca.
* fixes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
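The multimodal work above targets OpenAI-style messages where the image content part comes first; a Python sketch with `requests`, with the model name and image URL as placeholders.

```python
import requests

payload = {
    "model": "llava",  # placeholder multimodal model name
    "messages": [{
        "role": "user",
        # The image part is placed at the beginning of the message content.
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            {"type": "text", "text": "What is in this picture?"},
        ],
    }],
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```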
* wip: llama.cpp c++ gRPC server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make it work, attach it to the build process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: add protobuf dep
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* try fix protobuf on cmake
* cmake: workarounds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add packages
* cmake: use fixed version of grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cmake(grpc): install locally
* install grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* install required deps for grpc on debian bullseye
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* debug
* debug
* Fixups
* no need to install cmake manually
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: fixup macOS
* use brew whenever possible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* macOS fixups
* debug
* fix container build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* workaround
* try mac
https://stackoverflow.com/questions/23905661/on-mac-g-clang-fails-to-search-usr-local-include-and-usr-local-lib-by-def
* Temporarily disable arm64 Docker image builds
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Aman Karmani <aman@tmm1.net>
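Since the llama.cpp backend above is exposed as a gRPC server that the main process connects to, here is a minimal Python sketch (assuming `grpcio`) for checking that such a backend is reachable; the address is illustrative and the actual RPCs are defined by the project's proto files.

```python
import grpc

# Illustrative backend address; the real one is assigned when the backend starts.
channel = grpc.insecure_channel("127.0.0.1:50051")
try:
    grpc.channel_ready_future(channel).result(timeout=5)
    print("backend gRPC server is reachable")
except grpc.FutureTimeoutError:
    print("backend gRPC server did not become ready in time")
finally:
    channel.close()
```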
Lays some of the groundwork for LLaMA 2 compatibility, as well as for other future models with complex prompting schemes.
Starts a small refactoring of template loading in pkg/model/loader.go. It is currently still part of ModelLoader, but it should be easy to extend template loading to cases other than the overall prompt templates and the new chat-specific per-message templates.
Adds support for new chat-endpoint-specific, per-message templates as an alternative to the existing Role: XYZ sprintf method.
Includes a temporary prompt template as an example, since I have a few questions before we merge in the model-gallery side changes (see ).
Minor debug logging changes.
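The real templates are Go text/template files; purely as a conceptual Python sketch of the per-message idea, each chat message is rendered through its own template before the prompt is assembled (the template string and fields are invented for illustration).

```python
# Conceptual sketch only: the actual implementation uses Go text/template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# A per-message template, applied to every message individually instead of a
# single sprintf-style "Role: content" line.
PER_MESSAGE_TEMPLATE = "<|{role}|>\n{content}\n"

prompt = "".join(PER_MESSAGE_TEMPLATE.format(**m) for m in messages)
prompt += "<|assistant|>\n"
print(prompt)
```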