diff --git a/LICENSE b/LICENSE
index b9c46f0a..ad671f24 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
MIT License
-Copyright (c) 2023 go-skynet authors
+Copyright (c) 2023 Ettore Di Giacinto
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
diff --git a/README.md b/README.md
index 60f2ca65..3560f1ca 100644
--- a/README.md
+++ b/README.md
@@ -9,65 +9,32 @@
[![](https://dcbadge.vercel.app/api/server/uJAeKSAGDy?style=flat-square&theme=default-inverted)](https://discord.gg/uJAeKSAGDy)
-**LocalAI** is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run models locally or on-prem with consumer grade hardware, supporting multiple model families that are compatible with the ggml format.
+**LocalAI** is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and not only LLMs) locally or on-prem with consumer-grade hardware, supporting multiple model families that are compatible with the ggml format. It does not require a GPU.
-For a list of the supported model families, please see [the model compatibility table below](https://github.com/go-skynet/LocalAI#model-compatibility-table).
+For a list of the supported model families, please see [the model compatibility table](https://localai.io/model-compatibility/index.html#model-compatibility-table).
In a nutshell:
- Local, OpenAI drop-in alternative REST API. You own your data.
-- NO GPU required. NO Internet access is required either. Optional, GPU Acceleration is available in `llama.cpp`-compatible LLMs. [See building instructions](https://github.com/go-skynet/LocalAI#cublas).
+- NO GPU required. NO Internet access is required either. Optional, GPU Acceleration is available in `llama.cpp`-compatible LLMs. [See building instructions](https://localai.io/basics/build/index.html).
- Supports multiple models, Audio transcription, Text generation with GPTs, Image generation with stable diffusion (experimental)
- Once loaded the first time, it keeps models loaded in memory for faster inference
- Doesn't shell-out, but uses C++ bindings for faster inference and better performance.
-LocalAI is a community-driven project, focused on making the AI accessible to anyone. Any contribution, feedback and PR is welcome! It was initially created by [mudler](https://github.com/mudler/) at the [SpectroCloud OSS Office](https://github.com/spectrocloud).
+LocalAI was created by [Ettore Di Giacinto](https://github.com/mudler/) and is a community-driven project, focused on making AI accessible to anyone. Any contributions, feedback and PRs are welcome!
-See the [usage](https://github.com/go-skynet/LocalAI#usage) and [examples](https://github.com/go-skynet/LocalAI/tree/master/examples/) sections to learn how to use LocalAI. For a list of curated models check out the [model gallery](https://github.com/go-skynet/model-gallery).
+| [ChatGPT OSS alternative](https://github.com/go-skynet/LocalAI/tree/update_docs_2/examples/chatbot-ui) | [Image generation](https://localai.io/api-endpoints/index.html#image-generation) |
+|------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
+| ![Screenshot from 2023-04-26 23-59-55](https://user-images.githubusercontent.com/2420543/234715439-98d12e03-d3ce-4f94-ab54-2b256808e05e.png) | ![b6441997879](https://github.com/go-skynet/LocalAI/assets/2420543/d50af51c-51b7-4f39-b6c2-bf04c403894c) |
-### How does it work?
-
-
-LocalAI is an API written in Go that serves as an OpenAI shim, enabling software already developed with OpenAI SDKs to seamlessly integrate with LocalAI. It can be effortlessly implemented as a substitute, even on consumer-grade hardware. This capability is achieved by employing various C++ backends, including [ggml](https://github.com/ggerganov/ggml), to perform inference on LLMs using both CPU and, if desired, GPU.
-
-LocalAI uses C++ bindings for optimizing speed. It is based on [llama.cpp](https://github.com/ggerganov/llama.cpp), [gpt4all](https://github.com/nomic-ai/gpt4all), [rwkv.cpp](https://github.com/saharNooby/rwkv.cpp), [ggml](https://github.com/ggerganov/ggml), [whisper.cpp](https://github.com/ggerganov/whisper.cpp) for audio transcriptions, [bert.cpp](https://github.com/skeskinen/bert.cpp) for embedding and [StableDiffusion-NCN](https://github.com/EdVince/Stable-Diffusion-NCNN) for image generation. See [the model compatibility table](https://github.com/go-skynet/LocalAI#model-compatibility-table) to learn about all the components of LocalAI.
-
-![LocalAI](https://github.com/go-skynet/LocalAI/assets/2420543/38de3a9b-3866-48cd-9234-662f9571064a)
-
-
+See the [Getting started](https://localai.io/basics/getting_started/index.html) and [examples](https://github.com/go-skynet/LocalAI/tree/master/examples/) sections to learn how to use LocalAI. For a list of curated models check out the [model gallery](https://github.com/go-skynet/model-gallery).
## News
-- 23-05-2023: __v1.15.0__ released. `go-gpt2.cpp` backend got renamed to `go-ggml-transformers.cpp` updated including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. This impacts RedPajama, GptNeoX, MPT(not `gpt4all-mpt`), Dolly, GPT2 and Starcoder based models. [Binary releases available](https://github.com/go-skynet/LocalAI/releases), various fixes, including https://github.com/go-skynet/LocalAI/pull/341 .
-- 21-05-2023: __v1.14.0__ released. Minor updates to the `/models/apply` endpoint, `llama.cpp` backend updated including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. `gpt4all` is still compatible with the old format.
-- 19-05-2023: __v1.13.0__ released! 🔥🔥 Updates to the `gpt4all` and `llama` backends, consolidated CUDA support ( https://github.com/go-skynet/LocalAI/pull/310 thanks to @bubthegreat and @Thireus ), preliminary support for [installing models via API](https://github.com/go-skynet/LocalAI#advanced-prepare-models-using-the-api).
-- 17-05-2023: __v1.12.0__ released! 🔥🔥 Minor fixes, plus CUDA (https://github.com/go-skynet/LocalAI/pull/258) support for `llama.cpp`-compatible models and image generation (https://github.com/go-skynet/LocalAI/pull/272).
-- 16-05-2023: 🔥🔥🔥 Experimental support for CUDA (https://github.com/go-skynet/LocalAI/pull/258) in the `llama.cpp` backend and Stable diffusion CPU image generation (https://github.com/go-skynet/LocalAI/pull/272) in `master`.
+- 29-05-2023: LocalAI now has a website, [https://localai.io](https://localai.io)! Check out the news in the [dedicated section](https://localai.io/basics/news/index.html)!
-Now LocalAI can generate images too:
-
-| mode=0 | mode=1 (winograd/sgemm) |
-|------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
-| ![b6441997879](https://github.com/go-skynet/LocalAI/assets/2420543/d50af51c-51b7-4f39-b6c2-bf04c403894c) | ![winograd2](https://github.com/go-skynet/LocalAI/assets/2420543/1935a69a-ecce-4afc-a099-1ac28cb649b3) |
-
-- 14-05-2023: __v1.11.1__ released! `rwkv` backend patch release
-- 13-05-2023: __v1.11.0__ released! 🔥 Updated `llama.cpp` bindings: This update includes a breaking change in the model files ( https://github.com/ggerganov/llama.cpp/pull/1405 ) - old models should still work with the `gpt4all-llama` backend.
-- 12-05-2023: __v1.10.0__ released! 🔥🔥 Updated `gpt4all` bindings. Added support for GPTNeox (experimental), RedPajama (experimental), Starcoder (experimental), Replit (experimental), MosaicML MPT. Also now `embeddings` endpoint supports tokens arrays. See the [langchain-chroma](https://github.com/go-skynet/LocalAI/tree/master/examples/langchain-chroma) example! Note - this update does NOT include https://github.com/ggerganov/llama.cpp/pull/1405 which makes models incompatible.
-- 11-05-2023: __v1.9.0__ released! 🔥 Important whisper updates ( https://github.com/go-skynet/LocalAI/pull/233 https://github.com/go-skynet/LocalAI/pull/229 ) and extended gpt4all model families support ( https://github.com/go-skynet/LocalAI/pull/232 ). Redpajama/dolly experimental ( https://github.com/go-skynet/LocalAI/pull/214 )
-- 10-05-2023: __v1.8.0__ released! 🔥 Added support for fast and accurate embeddings with `bert.cpp` ( https://github.com/go-skynet/LocalAI/pull/222 )
-- 09-05-2023: Added experimental support for transcriptions endpoint ( https://github.com/go-skynet/LocalAI/pull/211 )
-- 08-05-2023: Support for embeddings with models using the `llama.cpp` backend ( https://github.com/go-skynet/LocalAI/pull/207 )
-- 02-05-2023: Support for `rwkv.cpp` models ( https://github.com/go-skynet/LocalAI/pull/158 ) and for `/edits` endpoint
-- 01-05-2023: Support for SSE stream of tokens in `llama.cpp` backends ( https://github.com/go-skynet/LocalAI/pull/152 )
-
-Twitter: [@LocalAI_API](https://twitter.com/LocalAI_API) and [@mudler_it](https://twitter.com/mudler_it)
-
-### Blogs, articles, media
-
-- [LocalAI meets k8sgpt](https://www.youtube.com/watch?v=PKrDNuJ_dfE) - CNCF Webinar showcasing LocalAI and k8sgpt.
-- [Question Answering on Documents locally with LangChain, LocalAI, Chroma, and GPT4All](https://mudler.pm/posts/localai-question-answering/) by Ettore Di Giacinto
-- [Tutorial to use k8sgpt with LocalAI](https://medium.com/@tyler_97636/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65) - an excellent use case for LocalAI, using AI to analyse Kubernetes clusters. By Tyler Gillson.
+For the latest news, also follow on Twitter: [@LocalAI_API](https://twitter.com/LocalAI_API) and [@mudler_it](https://twitter.com/mudler_it)
## Contribute and help
@@ -81,75 +48,11 @@ To help the project you can:
- If you don't have technical skills you can still help by improving documentation, adding examples, or sharing your user stories with our community; any help and contribution is welcome!
-## Model compatibility
-
-It is compatible with the models supported by [llama.cpp](https://github.com/ggerganov/llama.cpp) and also supports [GPT4ALL-J](https://github.com/nomic-ai/gpt4all) and [cerebras-GPT with ggml](https://huggingface.co/lxe/Cerebras-GPT-2.7B-Alpaca-SP-ggml).
-
-Tested with:
-- Vicuna
-- Alpaca
-- [GPT4ALL](https://gpt4all.io)
-- [GPT4ALL-J](https://gpt4all.io/models/ggml-gpt4all-j.bin) (no changes required)
-- Koala
-- [cerebras-GPT with ggml](https://huggingface.co/lxe/Cerebras-GPT-2.7B-Alpaca-SP-ggml)
-- WizardLM
-- [RWKV](https://github.com/BlinkDL/RWKV-LM) models with [rwkv.cpp](https://github.com/saharNooby/rwkv.cpp)
-
-Note: You might need to convert some older models to the new format; for indications, see [the README in llama.cpp](https://github.com/ggerganov/llama.cpp#using-gpt4all), for instance to run `gpt4all`.
-
-### RWKV
-
-
-
-A full example on how to run a rwkv model is in the [examples](https://github.com/go-skynet/LocalAI/tree/master/examples/rwkv).
-
-Note: rwkv models need the `rwkv` backend to be specified in the YAML config file, and an associated tokenizer must be provided alongside the model:
-
-```
-36464540 -rw-r--r-- 1 mudler mudler 1.2G May 3 10:51 rwkv_small
-36464543 -rw-r--r-- 1 mudler mudler 2.4M May 3 10:51 rwkv_small.tokenizer.json
-```
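-
-For reference, a minimal YAML model configuration for the files above could look like the following sketch (the file names come from the listing; see the rwkv example linked above for a complete, working setup):
-
-```yaml
-# hypothetical sketch of a config for the rwkv_small files shown above
-name: rwkv_small
-backend: rwkv
-parameters:
-  # relative to the models path; the tokenizer (rwkv_small.tokenizer.json) sits next to it
-  model: rwkv_small
-```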
-
-
-
-### Others
-
-It should also be compatible with StableLM and GPTNeoX ggml models (untested).
-
-### Hardware requirements
-
-Depending on the model you are attempting to run, you might need more RAM or CPU resources. Check out also [here](https://github.com/ggerganov/llama.cpp#memorydisk-requirements) for `ggml`-based backends. `rwkv` is less expensive on resources.
-
-
-### Model compatibility table
-
-
-
-| Backend and Bindings | Compatible models | Completion/Chat endpoint | Audio transcription/Image | Embeddings support | Token stream support |
-|----------------------------------------------------------------------------------|-----------------------|--------------------------|---------------------------|-----------------------------------|----------------------|
-| [llama](https://github.com/ggerganov/llama.cpp) ([binding](https://github.com/go-skynet/go-llama.cpp)) | Vicuna, Alpaca, LLaMa | yes | no | yes (doesn't seem to be accurate) | yes |
-| [gpt4all-llama](https://github.com/nomic-ai/gpt4all) | Vicuna, Alpaca, LLaMa | yes | no | no | yes |
-| [gpt4all-mpt](https://github.com/nomic-ai/gpt4all) | MPT | yes | no | no | yes |
-| [gpt4all-j](https://github.com/nomic-ai/gpt4all) | GPT4ALL-J | yes | no | no | yes |
-| [gpt2](https://github.com/ggerganov/ggml) ([binding](https://github.com/go-skynet/go-ggml-transformers.cpp)) | GPT2, Cerebras | yes | no | no | no |
-| [dolly](https://github.com/ggerganov/ggml) ([binding](https://github.com/go-skynet/go-ggml-transformers.cpp)) | Dolly | yes | no | no | no |
-| [gptj](https://github.com/ggerganov/ggml) ([binding](https://github.com/go-skynet/go-ggml-transformers.cpp)) | GPTJ | yes | no | no | no |
-| [mpt](https://github.com/ggerganov/ggml) ([binding](https://github.com/go-skynet/go-ggml-transformers.cpp)) | MPT | yes | no | no | no |
-| [replit](https://github.com/ggerganov/ggml) ([binding](https://github.com/go-skynet/go-ggml-transformers.cpp)) | Replit | yes | no | no | no |
-| [gptneox](https://github.com/ggerganov/ggml) ([binding](https://github.com/go-skynet/go-ggml-transformers.cpp)) | GPT NeoX, RedPajama, StableLM | yes | no | no | no |
-| [starcoder](https://github.com/ggerganov/ggml) ([binding](https://github.com/go-skynet/go-ggml-transformers.cpp)) | Starcoder | yes | no | no | no |
-| [bloomz](https://github.com/NouamaneTazi/bloomz.cpp) ([binding](https://github.com/go-skynet/bloomz.cpp)) | Bloom | yes | no | no | no |
-| [rwkv](https://github.com/saharNooby/rwkv.cpp) ([binding](https://github.com/donomii/go-rw)) | rwkv | yes | no | no | yes |
-| [bert](https://github.com/skeskinen/bert.cpp) ([binding](https://github.com/go-skynet/go-bert.cpp)) | bert | no | no | yes | no |
-| [whisper](https://github.com/ggerganov/whisper.cpp) | whisper | no | Audio | no | no |
-| [stablediffusion](https://github.com/EdVince/Stable-Diffusion-NCNN) ([binding](https://github.com/mudler/go-stable-diffusion)) | stablediffusion | no | Image | no | no |
-
-
## Usage
-> `LocalAI` comes by default as a container image. You can check out all the available images with corresponding tags [here](https://quay.io/repository/go-skynet/local-ai?tab=tags&tag=latest).
+Check out the [Getting started](https://localai.io/basics/getting_started/index.html) section. Below you will find generic, quick instructions to get up and running with LocalAI.
-The easiest way to run LocalAI is by using `docker-compose` (to build locally, see [building LocalAI](https://github.com/go-skynet/LocalAI/tree/master#setup)):
+The easiest way to run LocalAI is by using `docker-compose` (to build locally, see [building LocalAI](https://localai.io/basics/build/index.html)):
```bash
@@ -222,277 +125,6 @@ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/jso
```
-### Advanced: prepare models using the API
-
-Instead of installing models manually, you can use the LocalAI API endpoints and a model definition to install models programmatically at runtime.
-
-
-
-A curated collection of model files is in the [model-gallery](https://github.com/go-skynet/model-gallery) (work in progress!).
-
-To install, for example, `gpt4all-j`, you can send a POST call to the `/models/apply` endpoint with the model definition url (`url`) and the name the model should have in LocalAI (`name`, optional):
-
-```
-curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
- "url": "https://raw.githubusercontent.com/go-skynet/model-gallery/main/gpt4all-j.yaml",
- "name": "gpt4all-j"
- }'
-```
-
-
-
-
-### Other examples
-
-![Screenshot from 2023-04-26 23-59-55](https://user-images.githubusercontent.com/2420543/234715439-98d12e03-d3ce-4f94-ab54-2b256808e05e.png)
-
-To see other examples of how to integrate with other projects, for instance for question answering or for using it with chatbot-ui, see: [examples](https://github.com/go-skynet/LocalAI/tree/master/examples/).
-
-
-### Advanced configuration
-
-LocalAI can be configured to serve user-defined models with a set of default parameters and templates.
-
-
-
-You can either create multiple `yaml` files in the models path or specify a single YAML configuration file.
-Consider the following `models` folder in the `example/chatbot-ui`:
-
-```
-base ❯ ls -liah examples/chatbot-ui/models
-36487587 drwxr-xr-x 2 mudler mudler 4.0K May 3 12:27 .
-36487586 drwxr-xr-x 3 mudler mudler 4.0K May 3 10:42 ..
-36465214 -rw-r--r-- 1 mudler mudler 10 Apr 27 07:46 completion.tmpl
-36464855 -rw-r--r-- 1 mudler mudler 3.6G Apr 27 00:08 ggml-gpt4all-j
-36464537 -rw-r--r-- 1 mudler mudler 245 May 3 10:42 gpt-3.5-turbo.yaml
-36467388 -rw-r--r-- 1 mudler mudler 180 Apr 27 07:46 gpt4all.tmpl
-```
-
-The `gpt-3.5-turbo.yaml` file defines the `gpt-3.5-turbo` model, which is an alias for `gpt4all-j` with pre-defined options.
-
-For instance, consider the following that declares `gpt-3.5-turbo` backed by the `ggml-gpt4all-j` model:
-
-```yaml
-name: gpt-3.5-turbo
-# Default model parameters
-parameters:
- # Relative to the models path
- model: ggml-gpt4all-j
- # temperature
- temperature: 0.3
- # all the OpenAI request options here..
-
-# Default context size
-context_size: 512
-threads: 10
-# Define a backend (optional). By default it will try to guess the backend the first time the model is interacted with.
-backend: gptj # available: llama, stablelm, gpt2, gptj, rwkv
-# stopwords (if supported by the backend)
-stopwords:
-- "HUMAN:"
-- "### Response:"
-# define chat roles
-roles:
- user: "HUMAN:"
- system: "GPT:"
-template:
- # template file ".tmpl" with the prompt template to use by default on the endpoint call. Note there is no extension in the files
- completion: completion
- chat: ggml-gpt4all-j
-```
-
-Specifying a `config-file` via the CLI allows you to declare models in a single file as a list, for instance:
-
-```yaml
-- name: list1
- parameters:
- model: testmodel
- context_size: 512
- threads: 10
- stopwords:
- - "HUMAN:"
- - "### Response:"
- roles:
- user: "HUMAN:"
- system: "GPT:"
- template:
- completion: completion
- chat: ggml-gpt4all-j
-- name: list2
- parameters:
- model: testmodel
- context_size: 512
- threads: 10
- stopwords:
- - "HUMAN:"
- - "### Response:"
- roles:
- user: "HUMAN:"
- system: "GPT:"
- template:
- completion: completion
- chat: ggml-gpt4all-j
-```
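-
-To load such a multi-model configuration, point LocalAI at the file via the `config-file` flag (or the `CONFIG_FILE` environment variable, see the CLI section below). A sketch, with purely illustrative paths:
-
-```bash
-# hypothetical paths: a models directory plus the list-style config file shown above
-local-ai --models-path ./models --config-file ./models-list.yaml
-```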
-
-See also [chatbot-ui](https://github.com/go-skynet/LocalAI/tree/master/examples/chatbot-ui) as an example on how to use config files.
-
-### Full model config file reference
-
-```yaml
-name: gpt-3.5-turbo
-
-# Default model parameters
-parameters:
- # Relative to the models path
- model: ggml-gpt4all-j
- # temperature
- temperature: 0.3
- # all the OpenAI request options here..
- top_k:
- top_p:
- max_tokens:
- batch:
- f16: true
- ignore_eos: true
- n_keep: 10
- seed:
- mode:
- step:
-
-# Default context size
-context_size: 512
-# Default number of threads
-threads: 10
-# Define a backend (optional). By default it will try to guess the backend the first time the model is interacted with.
-backend: gptj # available: llama, stablelm, gpt2, gptj, rwkv
-# stopwords (if supported by the backend)
-stopwords:
-- "HUMAN:"
-- "### Response:"
-# string to trim space to
-trimspace:
-- string
-# Strings to cut from the response
-cutstrings:
-- "string"
-# define chat roles
-roles:
- user: "HUMAN:"
- system: "GPT:"
- assistant: "ASSISTANT:"
-template:
- # template file ".tmpl" with the prompt template to use by default on the endpoint call. Note there is no extension in the files
- completion: completion
- chat: ggml-gpt4all-j
- edit: edit_template
-
-# Enable F16 if backend supports it
-f16: true
-# Enable debugging
-debug: true
-# Enable embeddings
-embeddings: true
-# Mirostat configuration (llama.cpp only)
-mirostat_eta: 0.8
-mirostat_tau: 0.9
-mirostat: 1
-
-# GPU Layers (only used when built with cublas)
-gpu_layers: 22
-
-# Directory used to store additional assets (used for stablediffusion)
-asset_dir: ""
-```
-
-
-### Prompt templates
-
-The API doesn't inject a default prompt for talking to the model. You have to use a prompt similar to what's described in the stanford-alpaca docs: https://github.com/tatsu-lab/stanford_alpaca#data-release.
-
-
-You can use a default template for every model present in your model path, by creating a corresponding file with the `.tmpl` suffix next to your model. For instance, if the model is called `foo.bin`, you can create a sibling file, `foo.bin.tmpl` which will be used as a default prompt and can be used with alpaca:
-
-```
-The below instruction describes a task. Write a response that appropriately completes the request.
-
-### Instruction:
-{{.Input}}
-
-### Response:
-```
-
-See the [prompt-templates](https://github.com/go-skynet/LocalAI/tree/master/prompt-templates) directory in this repository for templates for some of the most popular models.
-
-
-For the edit endpoint, an example template for alpaca-based models can be:
-
-```
-Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
-
-### Instruction:
-{{.Instruction}}
-
-### Input:
-{{.Input}}
-
-### Response:
-```
-
-
-
-### CLI
-
-You can control LocalAI with command line arguments, for instance to specify a binding address or the number of threads.
-
-
-
-Usage:
-
-```
-local-ai --models-path <model_path> [--address <address>] [--threads <num_threads>]
-```
-
-| Parameter | Environment Variable | Default Value | Description |
-| ------------ | -------------------- | ------------- | -------------------------------------- |
-| models-path | MODELS_PATH | | The path where you have models (ending with `.bin`). |
-| threads | THREADS | Number of Physical cores | The number of threads to use for text generation. |
-| address | ADDRESS | :8080 | The address and port to listen on. |
-| context-size | CONTEXT_SIZE | 512 | Default token context size. |
-| debug | DEBUG | false | Enable debug mode. |
-| config-file | CONFIG_FILE | empty | Path to a LocalAI config file. |
-| upload_limit | UPLOAD_LIMIT | 5MB | Upload limit for whisper. |
-| image-path | IMAGE_PATH | empty | Image directory to store and serve processed images. |
-
-
-
-## Setup
-
-Currently LocalAI comes as a container image and can be used with docker or a container engine of choice. You can check out all the available images with corresponding tags [here](https://quay.io/repository/go-skynet/local-ai?tab=tags&tag=latest).
-
-### Docker
-
-
-Example of starting the API with `docker`:
-
-```bash
-docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4
-```
-
-You should see:
-```
-┌───────────────────────────────────────────────────┐
-│                   Fiber v2.42.0                   │
-│               http://127.0.0.1:8080               │
-│       (bound on host 0.0.0.0 and port 8080)       │
-│                                                   │
-│ Handlers ............. 1  Processes ........... 1 │
-│ Prefork ....... Disabled  PID ................. 1 │
-└───────────────────────────────────────────────────┘
-```
-
-Note: the binary inside the image is rebuilt at the start of the container to enable CPU optimizations for the execution environment. You can set the environment variable `REBUILD` to `false` to prevent this behavior.
-
-
### Build locally
@@ -502,8 +134,8 @@ In order to build the `LocalAI` container image locally you can use `docker`:
```
# build the image
-docker build -t LocalAI .
-docker run LocalAI
+docker build -t localai .
+docker run localai
```
Or you can build the binary with `make`:
@@ -514,520 +146,19 @@ make build
-### Build on mac
-
-Building on Mac (M1 or M2) works, but you may need to install some prerequisites using `brew`.
-
-
-
-The following has been tested by one Mac user and found to work. Note that this doesn't use docker to run the server:
-
-```
-# install build dependencies
-brew install cmake
-brew install go
-
-# clone the repo
-git clone https://github.com/go-skynet/LocalAI.git
-
-cd LocalAI
-
-# build the binary
-make build
-
-# Download gpt4all-j to models/
-wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
-
-# Use a template from the examples
-cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/
-
-# Run LocalAI
-./local-ai --models-path ./models/ --debug
-
-# Now API is accessible at localhost:8080
-curl http://localhost:8080/v1/models
-
-curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
- "model": "ggml-gpt4all-j",
- "messages": [{"role": "user", "content": "How are you?"}],
- "temperature": 0.9
- }'
-```
-
-
-
-### Build with Image generation support
-
-
-
-**Requirements**: OpenCV, Gomp
-
-Image generation is experimental and requires `GO_TAGS=stablediffusion` to be set during build:
-
-```
-make GO_TAGS=stablediffusion rebuild
-```
-
-
-
-### Acceleration
-
-#### OpenBLAS
-
-
-
-Requirements: OpenBLAS
-
-```
-make BUILD_TYPE=openblas build
-```
-
-
-
-#### CuBLAS
-
-
-
-Requirement: Nvidia CUDA toolkit
-
-Note: CuBLAS support is experimental and has not been tested on real hardware. Please report any issues you find!
-
-```
-make BUILD_TYPE=cublas build
-```
-
-More information is available in the upstream PR: https://github.com/ggerganov/llama.cpp/pull/1412
-
-
-
-### Windows compatibility
-
-It should work; however, you need to make sure you give enough resources to the container. See https://github.com/go-skynet/LocalAI/issues/2
+See the [build section](https://localai.io/basics/build/index.html) in our documentation for detailed instructions.
### Run LocalAI in Kubernetes
-LocalAI can be installed inside Kubernetes with helm.
+LocalAI can be installed inside Kubernetes with helm. See [installation instructions](https://localai.io/basics/getting_started/index.html#run-localai-in-kubernetes).
-
-By default, the helm chart will install the LocalAI instance using the ggml-gpt4all-j model without persistent storage.
+## Supported API endpoints
-1. Add the helm repo
- ```bash
- helm repo add go-skynet https://go-skynet.github.io/helm-charts/
- ```
-2. Install the helm chart:
- ```bash
- helm repo update
- helm install local-ai go-skynet/local-ai -f values.yaml
- ```
-> **Note:** For further configuration options, see the [helm chart repository on GitHub](https://github.com/go-skynet/helm-charts).
-### Example values
-Deploy a single LocalAI pod with 6GB of persistent storage serving up a `ggml-gpt4all-j` model with custom prompt.
-```yaml
-### values.yaml
-
-deployment:
- # Adjust the number of threads and context size for model inference
- env:
- threads: 14
- contextSize: 512
-
-# Set the pod requests/limits
-resources:
- limits:
- cpu: 4000m
- memory: 7000Mi
- requests:
- cpu: 100m
- memory: 6000Mi
-
-# Add a custom prompt template for the ggml-gpt4all-j model
-promptTemplates:
- # The name of the model this template belongs to
- ggml-gpt4all-j.bin.tmpl: |
- This is my custom prompt template...
- ### Prompt:
- {{.Input}}
- ### Response:
-
-# Model configuration
-models:
- # Don't re-download models on pod creation
- forceDownload: false
-
- # List of models to download and serve
- list:
- - url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
- # Optional basic HTTP authentication
- basicAuth: base64EncodedCredentials
-
- # Enable 6Gb of persistent storage models and prompt templates
- persistence:
- enabled: true
- size: 6Gi
-
-service:
- type: ClusterIP
- annotations: {}
- # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
- # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
-```
-
-
-## Supported OpenAI API endpoints
-
-You can check out the [OpenAI API reference](https://platform.openai.com/docs/api-reference/chat/create).
-
-The following is the list of supported endpoints and parameters.
-
-Note:
-
-- You can also specify the model as part of the OpenAI token (see the sketch below).
-- If only one model is available, the API will use it for all the requests.
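-
-As a sketch of the first note above, assuming a model named `gpt-3.5-turbo` is configured, the model name can be passed where an OpenAI client would normally send its API key:
-
-```bash
-# hedged example: the model is specified via the bearer token instead of the request body
-curl http://localhost:8080/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -H "Authorization: Bearer gpt-3.5-turbo" \
-  -d '{"messages": [{"role": "user", "content": "How are you?"}], "temperature": 0.7}'
-```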
-
-### Chat completions
-
-
-For example, to generate a chat completion, you can send a POST request to the `/v1/chat/completions` endpoint with the instruction as the request body:
-
-```
-curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
- "model": "ggml-koala-7b-model-q4_0-r2.bin",
- "messages": [{"role": "user", "content": "Say this is a test!"}],
- "temperature": 0.7
- }'
-```
-
-Available additional parameters: `top_p`, `top_k`, `max_tokens`
-
-
-### Edit completions
-
-
-To generate an edit completion you can send a POST request to the `/v1/edits` endpoint with the instruction as the request body:
-
-```
-curl http://localhost:8080/v1/edits -H "Content-Type: application/json" -d '{
- "model": "ggml-koala-7b-model-q4_0-r2.bin",
- "instruction": "rephrase",
- "input": "Black cat jumped out of the window",
- "temperature": 0.7
- }'
-```
-
-Available additional parameters: `top_p`, `top_k`, `max_tokens`.
-
-
-
-### Completions
-
-
-
-To generate a completion, you can send a POST request to the `/v1/completions` endpoint with the prompt in the request body:
-
-```
-curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
- "model": "ggml-koala-7b-model-q4_0-r2.bin",
- "prompt": "A long time ago in a galaxy far, far away",
- "temperature": 0.7
- }'
-```
-
-Available additional parameters: `top_p`, `top_k`, `max_tokens`
-
-
-
-### List models
-
-
-You can list all the models available with:
-
-```
-curl http://localhost:8080/v1/models
-```
-
-
-
-### Embeddings
-
-OpenAI docs: https://platform.openai.com/docs/api-reference/embeddings
-
-
-
-The embedding endpoint is experimental and enabled only if the model is configured with `embeddings: true` in its `yaml` file, for example:
-
-```yaml
-name: text-embedding-ada-002
-parameters:
- model: bert
-embeddings: true
-backend: "bert-embeddings"
-```
-
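-With such a model configured, the endpoint can be called like the OpenAI embeddings endpoint; a minimal sketch (the model name matches the configuration above):
-
-```bash
-curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" -d '{
-    "input": "Your text string goes here",
-    "model": "text-embedding-ada-002"
-  }'
-```
-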
-There is an example available [here](https://github.com/go-skynet/LocalAI/tree/master/examples/query_data/).
-
-Note: embeddings are supported only with `llama.cpp`-compatible models and `bert` models. bert is more performant and available independently of the LLM model.
-
-
-
-### Transcriptions endpoint
-
-
-
-Note: this requires ffmpeg in the container image, which is currently not shipped due to licensing issues. We will prepare separate images with ffmpeg (stay tuned!).
-
-Download one of the models from https://huggingface.co/ggerganov/whisper.cpp/tree/main into the `models` folder, and create a YAML file for your model:
-
-```yaml
-name: whisper-1
-backend: whisper
-parameters:
- model: whisper-en
-```
-
-The transcriptions endpoint then can be tested like so:
-```
-wget --quiet --show-progress -O gb1.ogg https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg
-
-curl http://localhost:8080/v1/audio/transcriptions -H "Content-Type: multipart/form-data" -F file="@$PWD/gb1.ogg" -F model="whisper-1"
-
-{"text":"My fellow Americans, this day has brought terrible news and great sadness to our country.At nine o'clock this morning, Mission Control in Houston lost contact with our Space ShuttleColumbia.A short time later, debris was seen falling from the skies above Texas.The Columbia's lost.There are no survivors.One board was a crew of seven.Colonel Rick Husband, Lieutenant Colonel Michael Anderson, Commander Laurel Clark, Captain DavidBrown, Commander William McCool, Dr. Kultna Shavla, and Elon Ramon, a colonel in the IsraeliAir Force.These men and women assumed great risk in the service to all humanity.In an age when spaceflight has come to seem almost routine, it is easy to overlook thedangers of travel by rocket and the difficulties of navigating the fierce outer atmosphere ofthe Earth.These astronauts knew the dangers, and they faced them willingly, knowing they had a highand noble purpose in life.Because of their courage and daring and idealism, we will miss them all the more.All Americans today are thinking as well of the families of these men and women who havebeen given this sudden shock and grief.You're not alone.Our entire nation agrees with you, and those you loved will always have the respect andgratitude of this country.The cause in which they died will continue.Mankind has led into the darkness beyond our world by the inspiration of discovery andthe longing to understand.Our journey into space will go on.In the skies today, we saw destruction and tragedy.As farther than we can see, there is comfort and hope.In the words of the prophet Isaiah, \"Lift your eyes and look to the heavens who createdall these, he who brings out the starry hosts one by one and calls them each by name.\"Because of his great power and mighty strength, not one of them is missing.The same creator who names the stars also knows the names of the seven souls we mourntoday.The crew of the shuttle Columbia did not return safely to Earth yet we can pray that all aresafely home.May God bless the grieving families and may God continue to bless America.[BLANK_AUDIO]"}
-```
-
-
-
-### Image generation
-
-OpenAI docs: https://platform.openai.com/docs/api-reference/images/create
-
-LocalAI supports generating images with Stable diffusion, running on CPU.
-
-| mode=0 | mode=1 (winograd/sgemm) |
-|------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
-| ![test](https://github.com/go-skynet/LocalAI/assets/2420543/7145bdee-4134-45bb-84d4-f11cb08a5638) | ![b643343452981](https://github.com/go-skynet/LocalAI/assets/2420543/abf14de1-4f50-4715-aaa4-411d703a942a) |
-| ![b6441997879](https://github.com/go-skynet/LocalAI/assets/2420543/d50af51c-51b7-4f39-b6c2-bf04c403894c) | ![winograd2](https://github.com/go-skynet/LocalAI/assets/2420543/1935a69a-ecce-4afc-a099-1ac28cb649b3) |
-| ![winograd](https://github.com/go-skynet/LocalAI/assets/2420543/1979a8c4-a70d-4602-95ed-642f382f6c6a) | ![winograd3](https://github.com/go-skynet/LocalAI/assets/2420543/e6d184d4-5002-408f-b564-163986e1bdfb) |
-
-
-
-To generate an image you can send a POST request to the `/v1/images/generations` endpoint with the instruction as the request body:
-
-```bash
-# 512x512 is supported too
-curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
- "prompt": "A cute baby sea otter",
- "size": "256x256"
- }'
-```
-
-Available additional parameters: `mode`, `step`.
-
-Note: To set a negative prompt, you can split the prompt with `|`, for instance: `a cute baby sea otter|malformed`.
-
-```bash
-curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
- "prompt": "floating hair, portrait, ((loli)), ((one girl)), cute face, hidden hands, asymmetrical bangs, beautiful detailed eyes, eye shadow, hair ornament, ribbons, bowties, buttons, pleated skirt, (((masterpiece))), ((best quality)), colorful|((part of the head)), ((((mutated hands and fingers)))), deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, Octane renderer, lowres, bad anatomy, bad hands, text",
- "size": "256x256"
- }'
-```
-
-Note: the image generator supports images up to 512x512. You can however use other tools to upscale the image, for instance: https://github.com/upscayl/upscayl.
-
-#### Setup
-
-Note: In order to use the `images/generation` endpoint, you need to build LocalAI with `GO_TAGS=stablediffusion`.
-
-1. Create a model file `stablediffusion.yaml` in the models folder:
-
-```yaml
-name: stablediffusion
-backend: stablediffusion
-asset_dir: stablediffusion_assets
-```
-2. Create a `stablediffusion_assets` directory inside your `models` directory
-3. Download the ncnn assets from https://github.com/EdVince/Stable-Diffusion-NCNN#out-of-box and place them in `stablediffusion_assets`.
-
-The models directory should look like the following:
-
-```
-models
-├── stablediffusion_assets
-│   ├── AutoencoderKL-256-256-fp16-opt.param
-│   ├── AutoencoderKL-512-512-fp16-opt.param
-│   ├── AutoencoderKL-base-fp16.param
-│   ├── AutoencoderKL-encoder-512-512-fp16.bin
-│   ├── AutoencoderKL-fp16.bin
-│   ├── FrozenCLIPEmbedder-fp16.bin
-│   ├── FrozenCLIPEmbedder-fp16.param
-│   ├── log_sigmas.bin
-│   ├── tmp-AutoencoderKL-encoder-256-256-fp16.param
-│   ├── UNetModel-256-256-MHA-fp16-opt.param
-│   ├── UNetModel-512-512-MHA-fp16-opt.param
-│   ├── UNetModel-base-MHA-fp16.param
-│   ├── UNetModel-MHA-fp16.bin
-│   └── vocab.txt
-└── stablediffusion.yaml
-```
-
-
-
-## LocalAI API endpoints
-
-Besides the OpenAI endpoints, there are additional LocalAI-only API endpoints.
-
-### Applying a model - `/models/apply`
-
-This endpoint can be used to install a model in runtime.
-
-
-
-LocalAI will create a batch process that downloads the required files from a model definition and automatically reloads itself to include the new model.
-
-Input: `url`, `name` (optional), `files` (optional)
-
-```bash
-curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
-     "url": "",
-     "name": "",
-     "files": [
-        {
-            "uri": "",
-            "sha256": "",
-            "filename": ""
-        }
-     ],
-     "overrides": { "backend": "...", "f16": true }
-   }'
-```
-
-An optional list of additional files to download can be specified within `files`. The `name` field allows you to override the model name. Finally, it is possible to override the model config file with `overrides`.
-
-Returns a `uuid` and a `url` that can be used to follow up on the state of the process:
-
-```json
-{ "uuid":"251475c9-f666-11ed-95e0-9a8a4480ac58", "status":"http://localhost:8080/models/jobs/251475c9-f666-11ed-95e0-9a8a4480ac58"}
-```
-
-To see a collection example of curated models definition files, see the [model-gallery](https://github.com/go-skynet/model-gallery).
-
-
-
-### Inquiry model job state `/models/jobs/`
-
-This endpoint returns the state of the batch job associated with a model.
-
-
-This endpoint can be used with the uuid returned by `/models/apply` to check a job state:
-
-```bash
-curl http://localhost:8080/models/jobs/251475c9-f666-11ed-95e0-9a8a4480ac58
-```
-
-Returns a JSON object containing the error (if any) and whether the job has been processed:
-
-```json
-{"error":null,"processed":true,"message":"completed"}
-```
-
-
-
-## Clients
-
-OpenAI clients are already compatible with LocalAI by overriding the basePath, or the target URL.
-
-## Javascript
-
-
-
-https://github.com/openai/openai-node/
-
-```javascript
-import { Configuration, OpenAIApi } from 'openai';
-
-const configuration = new Configuration({
- basePath: `http://localhost:8080/v1`
-});
-const openai = new OpenAIApi(configuration);
-```
-
-
-
-## Python
-
-
-
-https://github.com/openai/openai-python
-
-Set the `OPENAI_API_BASE` environment variable, or set the base URL in code:
-
-```python
-import openai
-
-openai.api_base = "http://localhost:8080/v1"
-
-# create a chat completion
-chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
-
-# print the completion
-print(chat_completion.choices[0].message.content)
-```
-
-
+See the [list of the supported API endpoints](https://localai.io/api-endpoints/index.html) and how to configure image generation and audio transcription.
## Frequently asked questions
-Here are answers to some of the most common questions.
-
-
-### How do I get models?
-
-
-
-Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet and directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, or models from gpt4all should also work: https://github.com/nomic-ai/gpt4all.
-
-
-
-### What's the difference with Serge, or XXX?
-
-
-
-
-LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of these internally for faster inference, and is easy to set up locally and deploy to Kubernetes.
-
-
-
-
-### Can I use it with a Discord bot, or XXX?
-
-
-
-Yes! If the client uses the OpenAI API and supports setting a different base URL to send requests to, you can use the LocalAI endpoint. This allows you to use LocalAI with every application that was built to work with OpenAI, without changing the application!
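-
-For instance, with clients built on the official OpenAI libraries it is often enough to point them at LocalAI before starting the application; a sketch using environment variables (the key value is a placeholder, it only needs to satisfy the client):
-
-```bash
-# point an OpenAI-compatible client at LocalAI instead of api.openai.com
-export OPENAI_API_BASE=http://localhost:8080/v1
-export OPENAI_API_KEY=sk-placeholder
-```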
-
-
-
-
-### Can this leverage GPUs?
-
-
-
-There is partial GPU support; see the build instructions above.
-
-
-
-### Where is the webUI?
-
-
-localai-webui and chatbot-ui are available in the examples section and can be set up as per the instructions. However, as LocalAI is an API, you can already plug it into existing projects that provide UI interfaces to OpenAI's APIs. There are several already on GitHub and they should be compatible with LocalAI (as it mimics the OpenAI API).
-
-
-
-### Does it work with AutoGPT?
-
-
-
-Yes, see the [examples](https://github.com/go-skynet/LocalAI/tree/master/examples/)!
-
-
+See [the FAQ](https://localai.io/faq/index.html) section for a list of common questions.
## Projects already using LocalAI to run local models
@@ -1058,17 +189,13 @@ Feel free to open up a PR to get your project listed!
## License
-LocalAI is a community-driven project. It was initially created by [Ettore Di Giacinto](https://github.com/mudler/) at the [SpectroCloud OSS Office](https://github.com/spectrocloud).
+LocalAI is a community-driven project created by [Ettore Di Giacinto](https://github.com/mudler/).
MIT
-## Golang bindings used
+## Author
-- [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
-- [go-skynet/go-gpt4all-j.cpp](https://github.com/go-skynet/go-gpt4all-j.cpp)
-- [go-skynet/go-ggml-transformers.cpp](https://github.com/go-skynet/go-ggml-transformers.cpp)
-- [go-skynet/go-bert.cpp](https://github.com/go-skynet/go-bert.cpp)
-- [donomii/go-rwkv.cpp](https://github.com/donomii/go-rwkv.cpp)
+Ettore Di Giacinto and others
## Acknowledgements