LocalAI/api
Ettore Di Giacinto 8ccf5b2044
feat(speculative-sampling): allow to specify a draft model in the model config (#1052)
**Description**

This PR fixes #1013.

It adds `draft_model` and `n_draft` to the model YAML config so that models
can be loaded with speculative sampling. This should also be compatible
with grammars.

Example:

```yaml
backend: llama
context_size: 1024
name: my-model-name
parameters:
  model: foo-bar
n_draft: 16
draft_model: model-name
```
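To illustrate what `n_draft` controls, here is a minimal, self-contained sketch of the speculative-sampling loop: a cheap draft model proposes up to `n_draft` tokens, and the target model keeps the prefix it agrees with. This is an illustrative toy only — LocalAI delegates the real implementation to llama.cpp, the acceptance rule there is probabilistic rather than greedy, and all names below (`make_model`, `speculative_step`) are hypothetical.

```python
# Toy speculative sampling with greedy lookup-table "models".
# Hypothetical sketch; the real backend (llama.cpp) works on logits,
# not deterministic tables.

def make_model(table):
    """Return a greedy next-token function backed by a lookup table."""
    return lambda ctx: table.get(ctx[-1], "<eos>")

# Draft model mostly agrees with the target, but diverges after "c".
target = make_model({"a": "b", "b": "c", "c": "d", "d": "e"})
draft = make_model({"a": "b", "b": "c", "c": "x", "x": "y"})

def speculative_step(ctx, n_draft):
    """Draft n_draft tokens cheaply, then keep the prefix the target agrees with."""
    proposed, tmp = [], list(ctx)
    for _ in range(n_draft):
        tok = draft(tmp)
        proposed.append(tok)
        tmp.append(tok)

    accepted, tmp = [], list(ctx)
    for tok in proposed:
        expected = target(tmp)
        if tok == expected:        # target agrees: accept the drafted token
            accepted.append(tok)
            tmp.append(tok)
        else:                      # first mismatch: emit the target's token, stop
            accepted.append(expected)
            break
    return accepted

# Draft proposes b, c, x, y; target accepts b, c, then corrects x -> d.
print(speculative_step(["a"], 4))  # -> ['b', 'c', 'd']
```

A larger `n_draft` lets more tokens be accepted per target-model pass when the draft model agrees often, at the cost of wasted draft work when it diverges early.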

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-09-14 17:44:16 +02:00
| Path | Last commit | Date |
| --- | --- | --- |
| backend | feat(speculative-sampling): allow to specify a draft model in the model config (#1052) | 2023-09-14 17:44:16 +02:00 |
| config | feat(speculative-sampling): allow to specify a draft model in the model config (#1052) | 2023-09-14 17:44:16 +02:00 |
| localai | feat: Model Gallery Endpoint Refactor / Mutable Galleries Endpoints (#991) | 2023-09-02 09:00:44 +02:00 |
| openai | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| options | feat: Model Gallery Endpoint Refactor / Mutable Galleries Endpoints (#991) | 2023-09-02 09:00:44 +02:00 |
| schema | fix: drop racy code, refactor and group API schema (#931) | 2023-08-20 14:04:45 +02:00 |
| api_test.go | fix(deps): update go-llama.cpp (#980) | 2023-08-30 23:01:55 +02:00 |
| api.go | feat: Model Gallery Endpoint Refactor / Mutable Galleries Endpoints (#991) | 2023-09-02 09:00:44 +02:00 |
| apt_suite_test.go | feat: add CI/tests (#58) | 2023-04-22 00:44:52 +02:00 |