Cleaning up examples/ models and starter .env files (#1124)

Closes https://github.com/go-skynet/LocalAI/issues/1066 and
https://github.com/go-skynet/LocalAI/issues/1065

Standardizes all `examples/`:
- Models in one place (other than `rwkv`, which was a one-off)
- Env files as `.env.example`, copied into place with `cp` (setup flow sketched below)
    - Also standardizes comments and links to docs
Author: James Braza, 2023-10-02 09:14:10 -07:00 (committed by GitHub)
Commit: e34b5f0119 (parent: c223364816)
34 changed files with 48 additions and 99 deletions
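
For reference, the standardized setup flow each example now follows looks roughly like this (a sketch assembled from the READMEs below; the example directory, env contents, and compose service name vary per example):

```bash
# Clone the repo and enter an example directory (autoGPT shown here)
git clone https://github.com/go-skynet/LocalAI
cd LocalAI/examples/autoGPT

# Copy the starter env file into place and adjust it as needed,
# e.g. change PRELOAD_MODELS to pick a different gallery model
cp -rfv .env.example .env
vim .env

# Bring the example up
docker-compose run --rm auto-gpt
```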

@@ -1,5 +1,9 @@
# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
OPENAI_API_KEY=sk---anystringhere
OPENAI_API_BASE=http://api:8080/v1
# Models to preload at start
# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings
PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}, { "url": "github:go-skynet/model-gallery/bert-embeddings.yaml", "name": "text-embedding-ada-002"}]
# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings,
# see other options in the model gallery at https://github.com/go-skynet/model-gallery
PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}, { "url": "github:go-skynet/model-gallery/bert-embeddings.yaml", "name": "text-embedding-ada-002"}]
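
`PRELOAD_MODELS` is a JSON array of model-gallery entries, each mapping a gallery YAML to the name the API serves it under. Once a stack is running, the preloaded models can be listed through LocalAI's OpenAI-compatible endpoint; a sketch, assuming the default `8080:8080` port mapping these examples use:

```bash
# Should list gpt-3.5-turbo and text-embedding-ada-002 once preloading finishes
curl http://localhost:8080/v1/models
```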

@@ -10,12 +10,16 @@ git clone https://github.com/go-skynet/LocalAI
cd LocalAI/examples/autoGPT
cp -rfv .env.example .env
# Edit the .env file to set a different model by editing `PRELOAD_MODELS`.
vim .env
docker-compose run --rm auto-gpt
```
Note: The example automatically downloads the `gpt4all` model, as it is under a permissive license. The GPT4All model does not seem capable enough to run AutoGPT; WizardLM-7b-uncensored seems to perform better (with `f16: true`).
See the `.env` configuration file to set a different model with the [model-gallery](https://github.com/go-skynet/model-gallery) by editing `PRELOAD_MODELS`.
## Without docker

@@ -0,0 +1 @@
../models
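
This one-line file is a relative symlink pointing the example at the shared `examples/models/` directory; a link like this can be recreated with, for instance:

```bash
# Run from inside the example directory (path assumed from the diff above)
ln -s ../models models
```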

@@ -1,3 +1,6 @@
# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
OPENAI_API_KEY=x
DISCORD_BOT_TOKEN=x
DISCORD_CLIENT_ID=x

@@ -1 +1 @@
../chatbot-ui/models/
../models

@@ -1,7 +1,11 @@
# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
OPENAI_API_KEY=sk---anystringhere
OPENAI_API_BASE=http://api:8080/v1
# Models to preload at start
# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings
# Here we configure gpt4all as gpt-3.5-turbo and bert as embeddings,
# see other options in the model gallery at https://github.com/go-skynet/model-gallery
PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/openllama-7b-open-instruct.yaml", "name": "gpt-3.5-turbo"}]
## Change the default number of threads

@@ -10,9 +10,12 @@ git clone https://github.com/go-skynet/LocalAI
cd LocalAI/examples/functions
cp -rfv .env.example .env
# Edit the .env file to set a different model by editing `PRELOAD_MODELS`.
vim .env
docker-compose run --rm functions
```
Note: The example automatically downloads the `openllama` model as it is under a permissive license.
See the `.env` configuration file to set a different model with the [model-gallery](https://github.com/go-skynet/model-gallery) by editing `PRELOAD_MODELS`.
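
Once the stack is up, the preloaded `gpt-3.5-turbo` alias can be smoke-tested through LocalAI's OpenAI-compatible chat endpoint; a sketch, again assuming the default `:8080` port mapping:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'
```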

@@ -1,3 +1,6 @@
# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
THREADS=4
CONTEXT_SIZE=512
MODELS_PATH=/models

@@ -0,0 +1 @@
../models

@@ -1 +0,0 @@
{{.Input}}

@@ -1,16 +0,0 @@
name: gpt-3.5-turbo
parameters:
model: ggml-gpt4all-j
top_k: 80
temperature: 0.2
top_p: 0.7
context_size: 1024
stopwords:
- "HUMAN:"
- "GPT:"
roles:
user: " "
system: " "
template:
completion: completion
chat: gpt4all

@@ -1,4 +0,0 @@
The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
{{.Input}}
### Response:

@@ -0,0 +1 @@
../models

@@ -1 +0,0 @@
{{.Input}}

@@ -1,17 +0,0 @@
name: gpt-3.5-turbo
parameters:
model: gpt2
top_k: 80
temperature: 0.2
top_p: 0.7
context_size: 1024
backend: "langchain-huggingface"
stopwords:
- "HUMAN:"
- "GPT:"
roles:
user: " "
system: " "
template:
completion: completion
chat: gpt4all

@@ -1,4 +0,0 @@
The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
{{.Input}}
### Response:

examples/langchain/models (symbolic link)
@@ -0,0 +1 @@
../models

@@ -1 +0,0 @@
{{.Input}}

@@ -1,17 +0,0 @@
name: gpt-3.5-turbo
parameters:
model: ggml-gpt4all-j # ggml-koala-13B-4bit-128g
top_k: 80
temperature: 0.2
top_p: 0.7
context_size: 1024
stopwords:
- "HUMAN:"
- "GPT:"
roles:
user: " "
system: " "
backend: "gptj"
template:
completion: completion
chat: gpt4all

@@ -1,4 +0,0 @@
The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
{{.Input}}
### Response:

@@ -8,8 +8,6 @@ services:
dockerfile: Dockerfile
ports:
- 8080:8080
env_file:
- .env
volumes:
- ./models:/models:cached
command: ["/usr/bin/local-ai"]
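
Note that `env_file:` injects variables into the container's environment, while Docker Compose separately reads a project-level `.env` for `${VAR}` interpolation in the compose file itself. After a removal like this, the resolved configuration can be sanity-checked with:

```bash
# Prints the fully resolved compose config, so any reference that still
# expected a variable from the removed env_file shows up as empty
docker-compose config
```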

examples/models/.gitignore (new file)
@@ -0,0 +1,7 @@
# Ignore everything but predefined models
*
!.gitignore
!completion.tmpl
!embeddings.yaml
!gpt4all.tmpl
!gpt-3.5-turbo.yaml
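
This whitelist pattern ignores everything in `examples/models/` except the tracked configs and templates, so downloaded weights never get committed. Which rule applies to a given path can be checked with `git check-ignore` (the weight filename below is just an example):

```bash
# -v prints the .gitignore rule that matched; downloaded weights hit "*"
git check-ignore -v examples/models/ggml-gpt4all-j
# The "!" rules exempt tracked configs, so this exits non-zero (not ignored)
git check-ignore -v examples/models/gpt-3.5-turbo.yaml
```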

examples/query_data/models (symbolic link)
@@ -0,0 +1 @@
../models

@@ -1 +0,0 @@
{{.Input}}

@@ -1,6 +0,0 @@
name: text-embedding-ada-002
parameters:
model: bert
threads: 14
backend: bert-embeddings
embeddings: true

@@ -1,16 +0,0 @@
name: gpt-3.5-turbo
parameters:
model: ggml-gpt4all-j
top_k: 80
temperature: 0.2
top_p: 0.7
context_size: 1024
stopwords:
- "HUMAN:"
- "GPT:"
roles:
user: " "
system: " "
template:
completion: completion
chat: gpt4all

@@ -1,3 +1,6 @@
# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
SLACK_APP_TOKEN=xapp-1-...
SLACK_BOT_TOKEN=xoxb-...
OPENAI_API_KEY=sk-...

@@ -18,7 +18,7 @@ git clone https://github.com/seratch/ChatGPT-in-Slack
# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
# Set the discord bot options (see: https://github.com/seratch/ChatGPT-in-Slack)
# Set the Slack bot options (see: https://github.com/seratch/ChatGPT-in-Slack)
cp -rfv .env.example .env
vim .env
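
Before wiring up the Slack tokens, it is worth confirming the weights downloaded above landed under the name the model config expects; a quick check:

```bash
# The filename must match the "model:" field in the gpt-3.5-turbo YAML config
ls -lh models/ggml-gpt4all-j
```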

@@ -1 +1 @@
../chatbot-ui/models
../models

@@ -1,3 +1,6 @@
# CPU .env docs: https://localai.io/howtos/easy-setup-docker-cpu/
# GPU .env docs: https://localai.io/howtos/easy-setup-docker-gpu/
# Create an app-level token with connections:write scope
SLACK_APP_TOKEN=xapp-1-...
# Install the app into your workspace to grab this token