LocalAI/pkg
Ettore Di Giacinto beb598e4f9
feat(functions): mixed JSON BNF grammars (#2328)
feat(functions): support mixed JSON BNF grammar

This PR adds new options to control how functions are extracted from
the LLM output, and gives more control over how JSON grammars can be
used (also in combination).

New YAML settings introduced:

- `grammar_message`: when enabled, the generated grammar also accepts
  plain strings, not only JSON objects. This lets the LLM choose between
  responding freely and responding with JSON.
- `grammar_prefix`: prefixes a string to the JSON grammar definition.
- `replace_results`: a map of string replacements to apply to the LLM
  result.
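
As a rough sketch of what `replace_results` implies, the raw model output can be post-processed by applying each replacement before JSON parsing. The helper name below is hypothetical; the actual LocalAI implementation may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// applyReplacements mimics a replace_results-style post-processing step:
// each key in the map is replaced with its value in the raw LLM output.
// (Illustrative helper, not LocalAI's actual function.)
func applyReplacements(raw string, replacements map[string]string) string {
	for from, to := range replacements {
		raw = strings.ReplaceAll(raw, from, to)
	}
	return raw
}

func main() {
	raw := "<tool_call>{'name': 'get_weather', 'arguments': {'city': 'Rome'}}"
	cleaned := applyReplacements(raw, map[string]string{
		"<tool_call>": "",
		"'":           "\"",
	})
	fmt.Println(cleaned)
	// → {"name": "get_weather", "arguments": {"city": "Rome"}}
}
```

Note that a Go map has no stable iteration order; since YAML maps are ordered, a real implementation that needs order-dependent replacements would iterate an ordered list of pairs instead.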

As an example, consider the following settings for Hermes-2-Pro-Mistral,
which allow extracting both JSON results coming from the model, and the
ones coming from the grammar:

```yaml
function:
  # disable injecting the "answer" tool
  disable_no_action: true
  # This allows the grammar to also return messages
  grammar_message: true
  # Prefix to add to the grammar
  grammar_prefix: '<tool_call>\n'
  return_name_in_function_response: true
  # Without grammar uncomment the lines below
  # Warning: this is relying only on the capability of the
  # LLM model to generate the correct function call.
  # no_grammar: true
  # json_regex_match: "(?s)<tool_call>(.*?)</tool_call>"
  replace_results:
    "<tool_call>": ""
    "'": "\""
```
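
Conceptually, the mixed grammar that `grammar_message` and `grammar_prefix` describe could be sketched in GBNF like this (illustrative only, not the exact grammar LocalAI generates):

```
# root can be either a free-text reply or a prefixed tool call
root       ::= freestring | toolcall
freestring ::= [^<] [^\n]*               # plain text (grammar_message)
toolcall   ::= "<tool_call>\n" object    # grammar_prefix before the JSON rule
object     ::= "{" ... "}"               # the usual JSON grammar rules
```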

Note: To disable grammar usage entirely in the example above, uncomment
`no_grammar` and `json_regex_match`.
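
In that grammar-less mode, the `json_regex_match` pattern from the example extracts the JSON payload from the model's `<tool_call>` wrapping. A minimal sketch of that extraction, using the same pattern (the function name is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
)

// The same pattern as json_regex_match in the example above; (?s) lets
// "." match newlines, and (.*?) captures the JSON payload non-greedily.
var toolCallRe = regexp.MustCompile(`(?s)<tool_call>(.*?)</tool_call>`)

// extractToolCalls returns every captured payload in the model output.
func extractToolCalls(output string) []string {
	var calls []string
	for _, m := range toolCallRe.FindAllStringSubmatch(output, -1) {
		calls = append(calls, m[1])
	}
	return calls
}

func main() {
	out := "Sure, let me check.\n<tool_call>{\"name\": \"get_weather\"}</tool_call>"
	fmt.Println(extractToolCalls(out))
	// → [{"name": "get_weather"}]
}
```

As the commit message warns, this path relies entirely on the model emitting well-formed `<tool_call>` blocks, since no grammar constrains the output.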

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-15 20:03:18 +02:00
| Directory | Last commit | Date |
|---|---|---|
| assets | feat(llama.cpp): add distributed llama.cpp inferencing (#2324) | 2024-05-15 01:17:02 +02:00 |
| downloader | fix: reduce chmod permissions for created files and directories (#2137) | 2024-04-26 00:47:06 +02:00 |
| functions | feat(functions): mixed JSON BNF grammars (#2328) | 2024-05-15 20:03:18 +02:00 |
| gallery | feat(ui): prompt for chat, support vision, enhancements (#2259) | 2024-05-08 00:42:34 +02:00 |
| grpc | refactor(application): introduce application global state (#2072) | 2024-04-29 17:42:37 +00:00 |
| langchain | feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants (#2232) | 2024-05-04 17:56:12 +02:00 |
| model | feat(llama.cpp): add distributed llama.cpp inferencing (#2324) | 2024-05-15 01:17:02 +02:00 |
| stablediffusion | feat: support upscaled image generation with esrgan (#509) | 2023-06-05 17:21:38 +02:00 |
| startup | feat: Galleries UI (#2104) | 2024-04-23 09:22:58 +02:00 |
| store | feat(stores): Vector store backend (#1795) | 2024-03-22 21:14:04 +01:00 |
| templates | fix: reduce chmod permissions for created files and directories (#2137) | 2024-04-26 00:47:06 +02:00 |
| tinydream | feat: add tiny dream stable diffusion support (#1283) | 2023-12-24 19:27:24 +00:00 |
| utils | refactor(application): introduce application global state (#2072) | 2024-04-29 17:42:37 +00:00 |
| xsync | feat(ui): prompt for chat, support vision, enhancements (#2259) | 2024-05-08 00:42:34 +02:00 |
| xsysinfo | feat(startup): show CPU/GPU information with --debug (#2241) | 2024-05-05 09:10:23 +02:00 |