* feat: enable polling configs for systems with broken fsnotify (docker volumes on windows)
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
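The context: fsnotify events never fire on some mounts (notably Docker volumes on Windows hosts), so the watcher falls back to polling. A minimal sketch of the approach, comparing the config file's mtime on a fixed interval; the `PollConfig` helper, interval, and callback shape are assumptions, not LocalAI's actual API:

```go
// Hypothetical polling fallback: re-check the file's modification time
// on a ticker instead of waiting for fsnotify events that never arrive.
package watcher

import (
	"os"
	"time"
)

// PollConfig invokes onChange whenever the file at path gets a newer
// modification time.
func PollConfig(path string, interval time.Duration, onChange func()) {
	var lastMod time.Time
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		info, err := os.Stat(path)
		if err != nil {
			continue // the file may be missing transiently
		}
		if mod := info.ModTime(); mod.After(lastMod) {
			lastMod = mod
			onChange()
		}
	}
}
```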
* fix: update logging to make it clear that the config file is being polled
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: refactor the dynamic json configs for api_keys and external_backends
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: remove commented code
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Dave <dave@gray101.com>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Dave <dave@gray101.com>
Co-authored-by: Dave <dave@gray101.com>
* feat: migrate to alecthomas/kong for CLI
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
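For reference, the shape of an alecthomas/kong CLI: flags are declared as struct fields with tags and parsed in a single call. The flag set below is illustrative only, not LocalAI's actual one:

```go
package main

import "github.com/alecthomas/kong"

// Illustrative flags; LocalAI's real CLI defines its own commands
// and flags in core/cli.
var cli struct {
	LogLevel   string `help:"Granular log level." enum:"error,warn,info,debug,trace" default:"info"`
	ModelsPath string `help:"Path to the models directory." type:"path"`
}

func main() {
	ctx := kong.Parse(&cli,
		kong.Name("local-ai"),
		kong.Description("Example of a kong-declared CLI."),
	)
	_ = ctx // with subcommands defined, dispatch on ctx.Command()
}
```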
* feat: bring in new flag for granular log levels
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* chore: go mod tidy
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: allow loading CLI flag values from ["./localai.yaml", "~/.config/localai.yaml", "/etc/localai.yaml"], in that order
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: load from .env file instead of a yaml file
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: better loading for environment files
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
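A sketch of the loading behavior described above, adapted to .env files: try a fixed list of locations and load the first file that exists. The file names mirror the earlier YAML search order, and the use of github.com/joho/godotenv is an assumption:

```go
package config

import (
	"os"
	"path/filepath"

	"github.com/joho/godotenv"
)

// loadEnvFile loads the first env file found, searching the current
// directory, the user config directory, then the system-wide location.
func loadEnvFile() error {
	home, _ := os.UserHomeDir()
	candidates := []string{
		"./localai.env",
		filepath.Join(home, ".config", "localai.env"),
		"/etc/localai.env",
	}
	for _, p := range candidates {
		if _, err := os.Stat(p); err == nil {
			return godotenv.Load(p) // populates the process env from the file
		}
	}
	return nil // running without an env file is fine
}
```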
* feat(doc): add initial documentation about configuration
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: remove test log lines
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: integrate new documentation into existing pages
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: add documentation on .env files
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: cleanup some documentation table errors
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* feat: refactor CLI logic out to its own package under core/cli
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix(seed): generate random seed per-request if -1 is set
Also update CI with new workflows and allow the AIO tests to run with an
API key
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
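A minimal sketch of the seed behavior; the helper below is hypothetical, not LocalAI's actual function, but it shows the per-request draw when the sentinel -1 is configured:

```go
package config

import "math/rand"

// resolveSeed returns the configured seed, or draws a fresh random
// seed for this request when the sentinel value -1 is set. The helper
// name and range are assumptions.
func resolveSeed(configured int) int {
	if configured == -1 {
		return rand.Intn(1 << 30)
	}
	return configured
}
```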
* docs(openvino): Add OpenVINO example
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(grammar): Fix JSON mode and custom grammar
* tests(aio): add jsonmode test
* tests(aio): add functioncall test
* fix(aio): use hermes-2-pro-mistral as llm for CPU profile
* add phi-2-orange
* Initial implementation of assistants api
* Move load/save configs to utils
* Save assistant and assistantfiles config to disk (a persistence sketch follows this list).
* Add tests for the assistants API.
* Fix models path spelling mistake.
* Remove personal go.mod information
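As referenced above, a minimal sketch of the persistence idea: JSON-encode the in-memory assistant list next to the other configs. The struct fields follow the OpenAI Assistants schema, but the file name and helper are assumptions:

```go
package assistants

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// Assistant mirrors the OpenAI Assistants schema; only a few fields
// are shown here.
type Assistant struct {
	ID           string   `json:"id"`
	Object       string   `json:"object"`
	CreatedAt    int64    `json:"created_at"`
	Model        string   `json:"model"`
	Name         string   `json:"name"`
	Instructions string   `json:"instructions"`
	FileIDs      []string `json:"file_ids"`
}

// saveAssistants persists the in-memory list as JSON on every change,
// so assistants survive a server restart.
func saveAssistants(configDir string, assistants []Assistant) error {
	data, err := json.MarshalIndent(assistants, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(configDir, "assistants.json"), data, 0o600)
}
```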
---------
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* test with gguf instead of ggml. Updates testPrompt to match? Adds a debugging line to the Dockerfile that I've found helpful recently.
* fix testPrompt slightly
* Sad Experiment: Test GH runner without metal?
* break apart CGO_LDFLAGS
* switch runner
* upstream llama.cpp disables Metal on Github CI!
* missed a dir from clean-tests
* CGO_LDFLAGS
* tmate failure + NO_ACCELERATE
* whisper.cpp has a metal fix
* do the exact opposite of the name of this branch, but keep it around for unrelated fixes?
* add back newlines
* add tmate to linux for testing
* update fixtures
* timeout for tmate
Certain engines need to know at model-load time whether the embedding
feature should be enabled; however, it is impractical to have to set this
on ALL the backends that support embeddings.
The transformers and sentencetransformers backends handle both cases
seamlessly, without the setting having to be enabled explicitly.
The issue exists only for ggml-based models, which need to enable
feature sets during model loading (and thus require the `embedding`
setting); most other engines do not.
This change disables the code-side check, making embeddings easier to
use by not having to specify `embeddings: true` explicitly.
Part of: https://github.com/mudler/LocalAI/issues/1373
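In model-config terms, the effect is that a setup like the hypothetical one below now works for most backends without the explicit flag; only ggml-based models that toggle feature sets at load time still need it:

```yaml
# Hypothetical model config: with this change, `embeddings: true` is
# only needed for ggml-based backends that enable the feature at load time.
name: my-embedder
backend: sentencetransformers
parameters:
  model: all-MiniLM-L6-v2
# embeddings: true   # no longer required for backends like this one
```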
* feat(elevenlabs): map elevenlabs API support to TTS
This allows elevenlabs clients to work with LocalAI automatically by
supporting the elevenlabs API.
The elevenlabs server endpoint is implemented by wiring it to the existing
TTS endpoints.
Fixes: https://github.com/mudler/LocalAI/issues/1809
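Conceptually the compat layer is thin: an elevenlabs-shaped route that forwards to the existing TTS pipeline. A sketch of that wiring under the assumption of a Fiber app; the `doTTS` helper and the handler body are illustrative, not LocalAI's actual code:

```go
package main

import "github.com/gofiber/fiber/v2"

// doTTS stands in for LocalAI's existing TTS pipeline; it would return
// the path of the rendered audio file.
func doTTS(text, voice string) (string, error) { return "out.wav", nil }

func main() {
	app := fiber.New()
	// elevenlabs-shaped route, forwarded to the shared TTS handler.
	app.Post("/v1/text-to-speech/:voiceId", func(c *fiber.Ctx) error {
		var req struct {
			Text string `json:"text"` // elevenlabs request body field
		}
		if err := c.BodyParser(&req); err != nil {
			return fiber.ErrBadRequest
		}
		audio, err := doTTS(req.Text, c.Params("voiceId"))
		if err != nil {
			return err
		}
		return c.SendFile(audio)
	})
	app.Listen(":8080")
}
```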
* feat(openai/tts): compat layer with openai tts
Fixes: #1276
* fix: adapt tts CLI
* fix(defaults): set better defaults for inferencing
This changeset aims to provide better defaults and to properly detect when
no inference settings are provided with the model.
If none are specified, we default to mirostat sampling and offload all the
GPU layers (if a GPU is detected).
Related to https://github.com/mudler/LocalAI/issues/1373 and https://github.com/mudler/LocalAI/issues/1723
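A sketch of the defaulting logic just described; the struct, field names, and values are assumptions, illustrating the "only when unset" behavior:

```go
package config

// ModelConfig stands in for LocalAI's real config struct; pointer
// fields distinguish "unset" from an explicit zero value.
type ModelConfig struct {
	Mirostat   *int
	NGPULayers *int
}

// applyInferenceDefaults fills settings only when the model config
// leaves them unset.
func applyInferenceDefaults(cfg *ModelConfig, gpuDetected bool) {
	if cfg.Mirostat == nil {
		v := 2 // default to mirostat 2.0 sampling
		cfg.Mirostat = &v
	}
	if cfg.NGPULayers == nil && gpuDetected {
		all := 99999 // offload every layer the backend will accept
		cfg.NGPULayers = &all
	}
}
```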
* Adapt tests
* Also pre-initialize default seed
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back at calling python, fixed at dockerfile level
* Exllama2 restricted to only nvidia gpus
* Tokenizer to xpu
* core 1
* api/openai/files fix
* core 2 - core/config
* move over core api.go and tests to the start of core/http
* move over localai specific endpoints to core/http, begin the service/endpoint split there
* refactor big chunk on the plane
* refactor chunk 2 on plane, next step: port and modify changes to request.go
* easy fixes for request.go, major changes not done yet
* lintfix
* json tag lintfix?
* gitignore and .keep files
* strange fix attempt: rename the config dir?