Mirror of https://github.com/mudler/LocalAI.git (synced 2024-06-07 19:40:48 +00:00)
09e5d9007b

* move downloader out
* separate startup functions for preloading configuration files
* docs: add popular model examples
* shorteners
* Add llava
* Add mistral-openorca
* Better link to build section
* docs: update
* fixup
* Drop code dups
* Minor fixups
* Apply suggestions from code review
* ci: try to cache gRPC build during tests
* ci: do not build all images for tests, just necessary
* ci: cache gRPC also in release pipeline
* fixes
* Update model_preload_test.go

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
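The "preloading configuration files" item refers to applying model definitions such as the YAML below when LocalAI starts. As a minimal, hypothetical sketch only (the option name and gallery URL are assumptions, not taken from this commit), a preload list has one entry per model config to apply at startup:

# Hypothetical preload list, typically supplied at startup
# (e.g. via a PRELOAD_MODELS-style option; the exact name is an assumption here).
- url: github:go-skynet/model-gallery/llava.yaml   # config file to fetch and apply (illustrative URL)
  name: llava                                      # local name to register the model under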
31 lines · 679 B · YAML
backend: llama-cpp
context_size: 4096
f16: true

gpu_layers: 90
mmap: true
name: llava

roles:
  user: "USER:"
  assistant: "ASSISTANT:"
  system: "SYSTEM:"

mmproj: bakllava-mmproj.gguf
parameters:
  model: bakllava.gguf
  temperature: 0.2
  top_k: 40
  top_p: 0.95

template:
  chat: |
    A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
    {{.Input}}
    ASSISTANT:

download_files:
- filename: bakllava.gguf
  uri: huggingface://mys/ggml_bakllava-1/ggml-model-q4_k.gguf
- filename: bakllava-mmproj.gguf
  uri: huggingface://mys/ggml_bakllava-1/mmproj-model-f16.gguf
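The commit message also mentions adding a mistral-openorca example. As a hedged sketch only (the model URI and prompt template are illustrative assumptions, not the file shipped in this commit), the same schema covers a plain text model, pulling the weights directly through a huggingface:// URI instead of a download_files list:

# Hypothetical companion example in the same schema as the llava config above.
name: mistral-openorca
mmap: true
parameters:
  # Illustrative URI; the example added by the commit may point elsewhere.
  model: huggingface://TheBloke/Mistral-7B-OpenOrca-GGUF/mistral-7b-openorca.Q6_K.gguf
  temperature: 0.2
template:
  chat: |
    {{.Input}}
    <|im_start|>assistant

Dropped into the models/ directory, a file like this is typically picked up on startup, and any download_files entries (as in the llava config) are fetched before the model is loaded.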