LocalAI/pkg/backend/llm
Ettore Di Giacinto 8ccf5b2044
feat(speculative-sampling): allow to specify a draft model in the model config (#1052)
**Description**

This PR fixes #1013.

It adds `draft_model` and `n_draft` to the model YAML config so that models can be loaded with speculative sampling. This should also be compatible with grammars.

Example:

```yaml
backend: llama
context_size: 1024
name: my-model-name
parameters:
  model: foo-bar
n_draft: 16
draft_model: model-name
```
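The idea the two options drive can be sketched as follows: a minimal, greedy speculative-sampling loop in which a cheap draft model proposes `n_draft` tokens and the target model verifies them, accepting the longest matching prefix. The `draft_model` and `target_model` callables here are toy stand-ins, not LocalAI's implementation:

```python
def speculative_step(target_model, draft_model, tokens, n_draft):
    """One round of greedy speculative sampling.

    target_model / draft_model: callables mapping a token list to the
    next token (hypothetical stand-ins for the two configured models).
    """
    # 1. Draft model cheaply proposes n_draft candidate tokens.
    proposed = []
    ctx = list(tokens)
    for _ in range(n_draft):
        t = draft_model(ctx)
        proposed.append(t)
        ctx.append(t)

    # 2. Target model verifies each proposal in order; accept the
    #    longest matching prefix, then take the target's own token
    #    at the first mismatch.
    accepted = []
    ctx = list(tokens)
    for t in proposed:
        expected = target_model(ctx)
        if expected != t:
            accepted.append(expected)
            break
        accepted.append(t)
        ctx.append(t)
    return accepted
```

When the draft model agrees with the target model most of the time, each verification pass yields several tokens instead of one, which is where the speed-up comes from; a larger `n_draft` amortizes more work per pass but wastes draft tokens when the models diverge early.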

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-09-14 17:44:16 +02:00
| Directory | Last commit | Date |
| --- | --- | --- |
| bert | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| bloomz | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| falcon | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| gpt4all | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| langchain | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| llama | feat(speculative-sampling): allow to specify a draft model in the model config (#1052) | 2023-09-14 17:44:16 +02:00 |
| llama-stable | feat: add llama-stable backend (#932) | 2023-08-20 16:35:42 +02:00 |
| rwkv | Feat: rwkv improvements: (#937) | 2023-08-22 18:48:06 +02:00 |
| transformers | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |