Commit bc8f648a91

The default sampler on some models doesn't return enough candidates, which leads to a false sense of randomness. Tracing back the code, it looks like the temperature sampler may not leave enough candidates to pick from, and since the seed and "randomness" only take effect when picking among candidates, this yields the same results over and over. Fixes https://github.com/mudler/LocalAI/issues/1723 by updating the examples and documentation to use mirostat instead.
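To make the failure mode concrete, here is a toy sketch in plain Python (an illustration, not llama.cpp's actual sampler code): when the samplers that run before the final pick prune the candidate list down to a single token, the seeded random draw that follows can only ever return that token, so every seed yields identical output.

import random

def pick(candidates, seed):
    # The seed only matters if there is more than one candidate left to pick from.
    return random.Random(seed).choice(candidates)

# Temperature/top-k pruned the pool down to one survivor: the seed has no effect.
print({pick(["the"], s) for s in range(10)})                     # {'the'}

# A fuller candidate pool (what mirostat preserves): output varies with the seed.
print({pick(["the", "a", "one", "this"], s) for s in range(10)})

Mirostat v2 instead adapts the cutoff on the fly to hit a target entropy (mirostat_tau), keeping the candidate pool large enough for the seed to matter, which is what the configuration below switches to.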
backend: llama-cpp
context_size: 4096
f16: true

gpu_layers: 90
mmap: true
name: llava

roles:
  user: "USER:"
  assistant: "ASSISTANT:"
  system: "SYSTEM:"

mmproj: bakllava-mmproj.gguf
parameters:
  model: bakllava.gguf
  temperature: 0.2
  top_k: 40
  top_p: 0.95
  seed: -1 # -1 picks a random seed per request
  # Mirostat keeps the candidate pool from collapsing to a single token,
  # so the seed actually affects the output (see issue #1723 above).
  mirostat: 2 # Mirostat v2
  mirostat_eta: 1.0 # learning rate
  mirostat_tau: 1.0 # target entropy

template:
  chat: |
    A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
    {{.Input}}
    ASSISTANT:

download_files:
- filename: bakllava.gguf
  uri: huggingface://mys/ggml_bakllava-1/ggml-model-q4_k.gguf
- filename: bakllava-mmproj.gguf
  uri: huggingface://mys/ggml_bakllava-1/mmproj-model-f16.gguf

# "temperature" is a top-level request parameter in the OpenAI chat API,
# not a field of a message object, so it sits outside the "messages" array.
usage: |
    curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
        "model": "llava",
        "messages": [{"role": "user", "content": [{"type":"text", "text": "What is in the image?"}, {"type": "image_url", "image_url": {"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" }}]}],
        "temperature": 0.9}'