* feat(functions): allow using JSONRegexMatch unconditionally
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(functions): make json_regex_match a list
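As an illustrative sketch, the setting now accepts a list of patterns; the first regex below is the Hermes-style pattern used later in this log, while the second, looser one is hypothetical:
```yaml
function:
  json_regex_match:
    - "(?s)<tool_call>(.*?)</tool_call>"
    # hypothetical second pattern, catching an unterminated tag
    - "(?s)<tool_call>(.*)"
```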
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Correct llama3-8b-instruct model file
This must be a mistake, because the config references a model file that is different from the one actually being downloaded.
Assuming the downloaded file is the intended one, I corrected the specified model file to match it.
Signed-off-by: Aleksandr Oleinikov <10602045+tannisroot@users.noreply.github.com>
feat(functions): support mixed JSON BNF grammar
This PR provides new options to control how functions are extracted from
the LLM output, and also provides more control over how JSON grammars can be
used (including in combination).
New YAML settings introduced:
- `grammar_message`: when enabled, the generated grammar can also emit plain
strings, not only JSON objects. This lets the LLM choose between responding
freely and responding with JSON.
- `grammar_prefix`: prepends a string to the JSON grammar definition.
- `replace_results`: a map of strings to replace in the LLM result.
As an example, consider the following settings for Hermes-2-Pro-Mistral,
which allow extracting both the JSON results produced by the model itself
and those produced by the grammar:
```yaml
function:
  # disable injecting the "answer" tool
  disable_no_action: true
  # This allows the grammar to also return messages
  grammar_message: true
  # Prefix to add to the grammar
  grammar_prefix: '<tool_call>\n'
  return_name_in_function_response: true
  # To run without a grammar, uncomment the lines below.
  # Warning: this relies solely on the LLM's ability
  # to generate a correct function call.
  # no_grammar: true
  # json_regex_match: "(?s)<tool_call>(.*?)</tool_call>"
  replace_results:
    "<tool_call>": ""
    "\'": "\""
```
Note: to disable grammar usage entirely in the example above, uncomment the
`no_grammar` and `json_regex_match` lines; a sketch of the resulting config follows.
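This is a minimal sketch of that grammar-free variant, assembled from the commented lines above (the grammar-specific keys are dropped, as they no longer apply):
```yaml
function:
  disable_no_action: true
  return_name_in_function_response: true
  # rely solely on the model emitting a well-formed call
  no_grammar: true
  json_regex_match: "(?s)<tool_call>(.*?)</tool_call>"
  replace_results:
    "<tool_call>": ""
    "\'": "\""
```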
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* auto select cpu variant
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* remove cuda target for now
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* fix metal
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* fix path
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* cuda
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* auto select cuda
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* update test
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* select CUDA backend only if present
Signed-off-by: mudler <mudler@localai.io>
* ci: keep cuda bin in path
Signed-off-by: mudler <mudler@localai.io>
* Makefile: make dist now also builds cuda
Signed-off-by: mudler <mudler@localai.io>
* Keep pushing the fallback in case auto-flagset/nvidia fails
There could be other reasons why the default binary fails to start. For example, we might have detected an Nvidia GPU
while the user does not have the drivers/CUDA libraries installed on the system, so it would fail to start.
We keep the default llama.cpp build at the end of the llama.cpp backend list to fall back to in case things go wrong.
Signed-off-by: mudler <mudler@localai.io>
* Do not build cuda on MacOS
Signed-off-by: mudler <mudler@localai.io>
* cleanup
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
* Apply suggestions from code review
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
---------
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <mudler@localai.io>
When enabling grammar with functions, it might be useful to
allow more flexibility to support models that are fine-tuned to return
function calls of the form `{ "name": "function_name", "arguments": {...} }`
rather than `{ "function": "function_name", "arguments": {...} }`.
This might call for a more generic approach later on, but for the time being we can easily support both,
as we only have to specify different types.
If needed we can expand on this later on.
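Side by side, the two accepted shapes look as follows (the function name and arguments are hypothetical):
```json
{ "name": "get_weather", "arguments": { "city": "Rome" } }
{ "function": "get_weather", "arguments": { "city": "Rome" } }
```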
Signed-off-by: mudler <mudler@localai.io>
* feat: create bash library to handle install/run/test of python backends
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* chore: minor cleanup
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: remove incorrect LIMIT_TARGETS from parler-tts
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* fix: update runUnittests to handle running tests from a custom test file
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
* chore: document runUnittests
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
---------
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>