name: "llama3-instruct"
license: llama3

description: |
  Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

  Model developers Meta

  Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

  Input Models input text only.

  Output Models generate text and code only.

  Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
urls:
- https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

tags:
- llm
- gguf
- gpu
- cpu

config_file: |
  mmap: true
  template:
    chat_message: |
      <|start_header_id|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}}<|end_header_id|>

      {{ if .FunctionCall -}}
      Function call:
      {{ else if eq .RoleName "tool" -}}
      Function response:
      {{ end -}}
      {{ if .Content -}}
      {{.Content -}}
      {{ else if .FunctionCall -}}
      {{ toJson .FunctionCall -}}
      {{ end -}}
      <|eot_id|>
    function: |
      <|start_header_id|>system<|end_header_id|>

      You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
      <tools>
      {{range .Functions}}
      {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
      {{end}}
      </tools>
      Use the following pydantic model json schema for each tool call you will make:
      {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
      Function call:
    chat: |
      <|begin_of_text|>{{.Input }}
      <|start_header_id|>assistant<|end_header_id|>
    completion: |
      {{.Input}}
  context_size: 8192
  f16: true
  stopwords:
  - <|im_end|>
  - <dummy32000>
  - "<|eot_id|>"
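# Minimal usage sketch, hedged: it assumes a LocalAI instance serving this model on
# localhost:8080 (LocalAI's default port) and the model installed under the gallery
# name "llama3-instruct". Any OpenAI-compatible client can then hit LocalAI's
# /v1/chat/completions endpoint, for example the openai Python package:
#
#   from openai import OpenAI
#
#   # The api_key value is a placeholder; LocalAI does not require a key by default.
#   client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
#   resp = client.chat.completions.create(
#       model="llama3-instruct",
#       messages=[{"role": "user", "content": "Hello!"}],
#   )
#   print(resp.choices[0].message.content)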