gallery: Added some OpenVINO models (#2249)

* Added some OpenVINO models

Added trust_remote_code: true to Phi-3 (see the config sketch after this message)
Added Hermes 2 Pro Llama3
Added Multilingual-E5-base embedding model with OpenVINO acceleration (CPU and XPU)
Added all-MiniLM-L6-v2 with OpenVINO acceleration (CPU and XPU)

* Added the remote-code flag for Phi-3 and fixed a yamllint error

* update openvino.yaml

I need to go to rest: today is not my day...
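
For reference, the new Phi-3 entry merged with the shared OpenVINO base resolves to roughly the model configuration sketched below. This is assembled from the fragments visible in this diff; the backend line and the exact field placement are assumptions, not a copy of the upstream files.

```yaml
# Sketch of the effective config for the Phi-3 OpenVINO entry.
# All option names appear in this PR; backend/template come from the shared
# openvino.yaml base, whose full contents are not shown here.
backend: transformers          # assumption: the transformers backend drives the OVModel* classes
type: OVModelForCausalLM
trust_remote_code: true        # the Phi-3 repository ships custom modelling code
context_size: 131072
parameters:
  model: fakezeta/Phi-3-mini-128k-instruct-ov-int8
stopwords:
- <|end|>
template:
  use_tokenizer_template: true
```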
fakezeta 2024-05-06 10:52:05 +02:00 committed by GitHub
parent c5475020fe
commit 169d8d21ff
3 changed files with 61 additions and 5 deletions


@@ -45,10 +45,11 @@ LocalAI will attempt to automatically load models which are not explicitly configured
| [tinydream](https://github.com/symisc/tiny-dream#tiny-dreaman-embedded-header-only-stable-diffusion-inference-c-librarypixlabiotiny-dream) | stablediffusion | no | Image | no | no | N/A |
| `coqui` | Coqui | no | Audio generation and Voice cloning | no | no | CPU/CUDA |
| `petals` | Various GPTs and quantization formats | yes | GPT | no | no | CPU/CUDA |
| `transformers` | Various GPTs and quantization formats | yes | GPT, embeddings | yes | no | CPU/CUDA |
| `transformers` | Various GPTs and quantization formats | yes | GPT, embeddings | yes | yes**** | CPU/CUDA/XPU |
Note: any backend name listed above can be used in the `backend` field of the model configuration file (See [the advanced section]({{%relref "docs/advanced" %}})).
- \* 7b ONLY
- ** doesn't seem to be accurate
- *** 7b and 40b with the `ggccv` format, for instance: https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GGML
- **** Only for CUDA and OpenVINO CPU/XPU acceleration.
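
To make the updated `transformers` row concrete: a minimal model configuration that selects the backend via the `backend` field and uses OpenVINO feature extraction for embeddings might look like the sketch below. The file name and layout are assumptions; only the individual options appear elsewhere in this PR.

```yaml
# Hypothetical model config, e.g. models/multilingual-e5-base.yaml (name assumed).
name: multilingual-e5-base
backend: transformers              # backend name taken from the table above
type: OVModelForFeatureExtraction  # OpenVINO class used for the embedding models in this PR
embeddings: true
parameters:
  model: intfloat/multilingual-e5-base
```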


@@ -1056,11 +1056,19 @@
  urls:
  - https://huggingface.co/fakezeta/Phi-3-mini-128k-instruct-ov-int8
  overrides:
    trust_remote_code: true
    context_size: 131072
    parameters:
      model: fakezeta/Phi-3-mini-128k-instruct-ov-int8
    stopwords:
    - <|end|>
  tags:
  - llm
  - openvino
  - gpu
  - phi3
  - cpu
  - Remote Code Enabled
- <<: *openvino
  name: "openvino-starling-lm-7b-beta-openvino-int8"
  urls:
@@ -1069,6 +1077,12 @@
    context_size: 8192
    parameters:
      model: fakezeta/Starling-LM-7B-beta-openvino-int8
  tags:
  - llm
  - openvino
  - gpu
  - mistral
  - cpu
- <<: *openvino
  name: "openvino-wizardlm2"
  urls:
@@ -1077,6 +1091,50 @@
    context_size: 8192
    parameters:
      model: fakezeta/Not-WizardLM-2-7B-ov-int8
- <<: *openvino
  name: "openvino-hermes2pro-llama3"
  urls:
  - https://huggingface.co/fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
  overrides:
    context_size: 8192
    parameters:
      model: fakezeta/Hermes-2-Pro-Llama-3-8B-ov-int8
  tags:
  - llm
  - openvino
  - gpu
  - llama3
  - cpu
- <<: *openvino
  name: "openvino-multilingual-e5-base"
  urls:
  - https://huggingface.co/intfloat/multilingual-e5-base
  overrides:
    embeddings: true
    type: OVModelForFeatureExtraction
    parameters:
      model: intfloat/multilingual-e5-base
  tags:
  - llm
  - openvino
  - gpu
  - embedding
  - cpu
- <<: *openvino
  name: "openvino-all-MiniLM-L6-v2"
  urls:
  - https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
  overrides:
    embeddings: true
    type: OVModelForFeatureExtraction
    parameters:
      model: sentence-transformers/all-MiniLM-L6-v2
  tags:
  - llm
  - openvino
  - gpu
  - embedding
  - cpu
### START Embeddings
- &sentencentransformers
  description: |


@@ -7,6 +7,3 @@ config_file: |
  type: OVModelForCausalLM
  template:
    use_tokenizer_template: true
  stopwords:
  - "<|eot_id|>"
  - "<|end_of_text|>"