LocalAI
💡 Get help - ❓FAQ 💭Discussions 💬 Discord 📖 Documentation website
LocalAI is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and not only) locally or on-prem with consumer grade hardware, supporting multiple model families that are compatible with the ggml format. Does not require GPU.
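Because the API mirrors the OpenAI specification, existing OpenAI clients usually only need their base URL changed to start talking to a local instance. A minimal sketch, assuming LocalAI is already running on its default port 8080 (as in the Usage section below); many client libraries of the time read the `OPENAI_API_BASE` environment variable, but check your client's documentation:

```bash
# Point OpenAI-compatible tooling at LocalAI instead of api.openai.com.
# LocalAI does not validate API keys, but some clients still require one to be set.
export OPENAI_API_BASE=http://localhost:8080/v1

# The usual endpoints then answer locally:
curl $OPENAI_API_BASE/models
```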
In a nutshell:
- Local, OpenAI drop-in alternative REST API. You own your data.
- NO GPU required. NO Internet access is required either.
- Optional GPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.
- Supports multiple models.
- 🏃 Once loaded the first time, models are kept in memory for faster inference.
- ⚡ Doesn't shell out, but uses C++ bindings for faster inference and better performance.
LocalAI was created by Ettore Di Giacinto and is a community-driven project, focused on making AI accessible to anyone. Any contribution, feedback and PR is welcome!
Note that this started as a fun weekend project to build the necessary pieces for a full AI assistant like ChatGPT: the community is growing fast and we are working hard to make it better and more stable. If you want to help, please consider contributing (see below)!
🚀 Features
- 📖 Text generation with GPTs (llama.cpp, gpt4all.cpp, ... and more)
- 🗣 Text to Audio
- 🔈 Audio to Text (audio transcription with whisper.cpp)
- 🎨 Image generation with stable diffusion
- 🔥 OpenAI functions 🆕
- 🧠 Embeddings generation for vector databases
- ✍️ Constrained grammars
- 🖼️ Download Models directly from Huggingface
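Several of these features are exposed as plain HTTP endpoints that follow the OpenAI API shape. The sketches below are hedged examples: the model names (`your-model.bin`, `your-embedding-model.bin`, `whisper-1`) and the audio file are placeholders, and the exact parameters are documented on the documentation website.

```bash
# Embeddings for vector databases
curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" -d '{
  "model": "your-embedding-model.bin",
  "input": "A sentence to turn into a vector"
}'

# Audio transcription with whisper.cpp (multipart upload of a local audio file)
curl http://localhost:8080/v1/audio/transcriptions \
  -F file=@./sample.wav -F model=whisper-1

# Image generation with stable diffusion
curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
  "prompt": "a cat sitting on a keyboard",
  "size": "256x256"
}'

# OpenAI functions: the request shape follows the OpenAI functions spec
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "your-model.bin",
  "messages": [{"role": "user", "content": "What is the weather like in Boston?"}],
  "functions": [{
    "name": "get_current_weather",
    "description": "Get the current weather for a location",
    "parameters": {
      "type": "object",
      "properties": { "location": { "type": "string" } },
      "required": ["location"]
    }
  }]
}'
```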
🔥🔥 Hot topics / Roadmap
- Support for embeddings
- Support for audio transcription with https://github.com/ggerganov/whisper.cpp
- Support for text-to-audio
- GPU/CUDA support (https://github.com/go-skynet/LocalAI/issues/69)
- Enable automatic downloading of models from a curated gallery
- Enable automatic downloading of models from HuggingFace
- Upstream our golang bindings to llama.cpp (https://github.com/ggerganov/llama.cpp/issues/351)
- Enable gallery management directly from the webui.
- 🔥 OpenAI functions: https://github.com/go-skynet/LocalAI/issues/588
- 🔥 GPTQ support: https://github.com/go-skynet/LocalAI/issues/796
📖 🎥 Media, Blogs, Social
- Create a slackbot for teams and OSS projects that answers questions based on documentation
- LocalAI meets k8sgpt
- Question Answering on Documents locally with LangChain, LocalAI, Chroma, and GPT4All
- Tutorial to use k8sgpt with LocalAI
💻 Usage
Check out the Getting started section. Below are quick, generic instructions to get up and running with LocalAI.
The easiest way to run LocalAI is by using docker-compose (to build locally, see building LocalAI):
```bash
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build

# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
  "model": "your-model.bin",
  "prompt": "A long time ago in a galaxy far, far away",
  "temperature": 0.7
}'
```
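The optional `.env` step above controls runtime settings. A hedged illustration of the kind of values it holds (variable names taken from the sample `.env` shipped in the repository; the values here are arbitrary, check the file itself for the authoritative list and defaults):

```bash
# Illustrative .env values, not defaults
THREADS=4            # CPU threads used for inference
CONTEXT_SIZE=512     # prompt context window
MODELS_PATH=/models  # where the API looks for model files inside the container
DEBUG=true           # verbose logging
```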
💡 Example: Use GPT4ALL-J model
```bash
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build

# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-gpt4all-j",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.9
}'
# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
```
🔗 Resources
❤️ Sponsors
Do you find LocalAI useful?
Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.
A huge thank you to our generous sponsors who support this project:
- Spectro Cloud: kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!
🌟 Star history
📖 License
LocalAI is a community-driven project created by Ettore Di Giacinto.
MIT - Author Ettore Di Giacinto
🙇 Acknowledgements
LocalAI couldn't have been built without the help of great software already available from the community. Thank you!
- https://github.com/ggerganov/llama.cpp
- https://github.com/tatsu-lab/stanford_alpaca
- https://github.com/cornelk/llama-go for the initial ideas
- https://github.com/antimatter15/alpaca.cpp
- https://github.com/EdVince/Stable-Diffusion-NCNN
- https://github.com/ggerganov/whisper.cpp
- https://github.com/saharNooby/rwkv.cpp
- https://github.com/rhasspy/piper
- https://github.com/cmp-nct/ggllm.cpp
🤗 Contributors
This is a community project, a special thanks to our contributors! 🤗