LocalAI/extra/grpc/exllama
Ettore Di Giacinto 8ccf5b2044
feat(speculative-sampling): allow to specify a draft model in the model config (#1052)
**Description**

This PR fixes #1013.

It adds `draft_model` and `n_draft` to the model YAML config so that models
can be loaded with speculative sampling. This should also be compatible with
grammars.

Example:

```yaml
backend: llama
context_size: 1024
name: my-model-name
parameters:
  model: foo-bar
n_draft: 16
draft_model: model-name
```
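
For illustration, a minimal sketch of how a model configured this way could be queried through LocalAI's OpenAI-compatible API; the host/port (`localhost:8080`), prompt, and parameters are assumptions, and speculative sampling happens transparently on the backend once `draft_model` is set:

```python
# Minimal sketch (not part of this PR): calling LocalAI's OpenAI-compatible
# chat completions endpoint with the model name from the YAML config above.
# The base URL, prompt, and temperature are assumptions for illustration only.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "my-model-name",  # the `name` field from the config
        "messages": [
            {"role": "user", "content": "Explain speculative sampling briefly."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```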

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-09-14 17:44:16 +02:00
| File | Latest commit | Date |
|---|---|---|
| backend_pb2_grpc.py | feat(diffusers): be consistent with pipelines, support also depthimg2img (#926) | 2023-08-18 22:06:24 +02:00 |
| backend_pb2.py | feat(speculative-sampling): allow to specify a draft model in the model config (#1052) | 2023-09-14 17:44:16 +02:00 |
| exllama.py | feat: add --single-active-backend to allow only one backend active at the time (#925) | 2023-08-19 01:49:33 +02:00 |