Mirror of https://github.com/mudler/LocalAI.git, synced 2024-06-07 19:40:48 +00:00
docs: Update Features->Embeddings page to reflect backend restructuring (#1325)
* Update path to sentencetransformers backend for local execution

Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>

* Rename huggingface-embeddings -> sentencetransformers in embeddings.md for consistency with the backend structure

The Dockerfile still knows the "huggingface-embeddings" backend (I assume for compatibility reasons) but uses the sentencetransformers backend under the hood anyway. I figured it would be good to update the docs to use the new naming to make it less confusing moving forward. As the docker container knows both the "huggingface-embeddings" and the "sentencetransformers" backend, this should not break anything.

Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>

---------

Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
This commit is contained in:
parent b1a20effde
commit 5a6a6de3d7
@@ -61,23 +61,23 @@ curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json
 ## Huggingface embeddings
 
-To use `sentence-formers` and models in `huggingface` you can use the `huggingface` embedding backend.
+To use `sentence-transformers` and models in `huggingface` you can use the `sentencetransformers` embedding backend.
 
 ```yaml
 name: text-embedding-ada-002
-backend: huggingface-embeddings
+backend: sentencetransformers
 embeddings: true
 parameters:
   model: all-MiniLM-L6-v2
 ```
 
-The `huggingface` backend uses Python [sentence-transformers](https://github.com/UKPLab/sentence-transformers). For a list of all pre-trained models available see here: https://github.com/UKPLab/sentence-transformers#pre-trained-models
+The `sentencetransformers` backend uses Python [sentence-transformers](https://github.com/UKPLab/sentence-transformers). For a list of all pre-trained models available see here: https://github.com/UKPLab/sentence-transformers#pre-trained-models
 
 {{% notice note %}}
 
-- The `huggingface` backend is an optional backend of LocalAI and uses Python. If you are running `LocalAI` from the containers you are good to go and should be already configured for use. If you are running `LocalAI` manually you must install the python dependencies (`pip install -r /path/to/LocalAI/extra/requirements`) and specify the extra backend in the `EXTERNAL_GRPC_BACKENDS` environment variable (`EXTERNAL_GRPC_BACKENDS="huggingface-embeddings:/path/to/LocalAI/extra/grpc/huggingface/huggingface.py"`).
-- The `huggingface` backend supports only embeddings of text, not of tokens. If you need to embed tokens you can use the `bert` backend or `llama.cpp`.
-- No models are required to be downloaded before using the `huggingface` backend. The models will be downloaded automatically the first time the API is used.
+- The `sentencetransformers` backend is an optional backend of LocalAI and uses Python. If you are running `LocalAI` from the containers you are good to go and should be already configured for use. If you are running `LocalAI` manually you must install the python dependencies (`pip install -r /path/to/LocalAI/extra/requirements`) and specify the extra backend in the `EXTERNAL_GRPC_BACKENDS` environment variable (`EXTERNAL_GRPC_BACKENDS="sentencetransformers:/path/to/LocalAI/backend/python/sentencetransformers/sentencetransformers.py"`).
+- The `sentencetransformers` backend supports only embeddings of text, not of tokens. If you need to embed tokens you can use the `bert` backend or `llama.cpp`.
+- No models are required to be downloaded before using the `sentencetransformers` backend. The models will be downloaded automatically the first time the API is used.
 
 {{% /notice %}}
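The `/embeddings` endpoint shown in the hunk header returns vectors in the OpenAI-compatible response shape. As a minimal sketch of how a client consumes such a response, the snippet below parses an illustrative JSON payload (the vector values and length are made up; a real `all-MiniLM-L6-v2` embedding is much longer) and compares embeddings with cosine similarity, the usual metric for sentence-transformers output:

```python
import json
import math

# Illustrative response in the OpenAI-compatible shape returned by
# LocalAI's /embeddings endpoint; the vector values here are made up.
sample = '''
{
  "object": "list",
  "data": [
    {"object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3]}
  ],
  "model": "text-embedding-ada-002"
}
'''

resp = json.loads(sample)
vec = resp["data"][0]["embedding"]

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# A vector is maximally similar to itself, so this prints (approximately) 1.0.
print(cosine(vec, vec))
```

In practice the two vectors compared would come from two separate `/embeddings` requests, one per input text.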