🤖 The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs ggml, gguf, GPTQ, onnx, TF compatible models: llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others



LocalAI

LocalAI is a drop-in replacement REST API that's compatible with the OpenAI API specification for local inferencing. It allows you to run LLMs (and not only LLMs) locally or on-prem on consumer-grade hardware, supporting multiple model families that are compatible with the ggml format. It does not require a GPU.

For a list of the supported model families, please see the model compatibility table.

In a nutshell:

  • Local, OpenAI drop-in alternative REST API. You own your data.
  • NO GPU required. NO Internet access required either. Optional GPU acceleration is available for llama.cpp-compatible LLMs; see the building instructions.
  • Supports multiple models: audio transcription, text generation with GPTs, image generation with stable diffusion (experimental).
  • Once a model is loaded for the first time, it is kept in memory for faster inference.
  • Doesn't shell out to other processes, but uses C++ bindings for faster inference and better performance.
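Because the API follows the OpenAI specification, existing client code only needs to point at a different base URL. A minimal sketch using only the Python standard library — the server address and model name (`your-model.bin`) are assumptions matching the quick-start instructions below, and the response is assumed to follow the OpenAI completions schema:

```python
import json
import urllib.request

API_BASE = "http://localhost:8080/v1"  # assumed default LocalAI address


def completion_payload(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-compatible /v1/completions request body."""
    return {"model": model, "prompt": prompt, "temperature": temperature}


def complete(model: str, prompt: str) -> str:
    """POST the payload to LocalAI and return the generated text."""
    req = urllib.request.Request(
        f"{API_BASE}/completions",
        data=json.dumps(completion_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]


if __name__ == "__main__":
    print(complete("your-model.bin", "A long time ago in a galaxy far, far away"))
```

The same pattern works with any OpenAI client library by overriding its base URL.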

LocalAI was created by Ettore Di Giacinto and is a community-driven project, focused on making AI accessible to anyone. Contributions, feedback, and PRs are welcome!

Screenshots: ChatGPT OSS alternative · Image generation

See the Getting started and examples sections to learn how to use LocalAI. For a list of curated models check out the model gallery.

News

For the latest news, also follow @LocalAI_API and @mudler_it on Twitter

Contribute and help

To help the project you can:

  • Upvote the Reddit post about LocalAI.

  • Hacker News post - help us out by voting if you like this project.

  • If you have technical skills and want to contribute to development, have a look at the open issues. If you are new, check out the good-first-issue and help-wanted labels.

  • If you don't have technical skills, you can still help by improving the documentation, adding examples, or sharing your user stories with our community; any help and contribution is welcome!

Usage

Check out the Getting started section. Below you will find generic, quick instructions to get up and running with LocalAI.

The easiest way to run LocalAI is by using docker-compose (to build locally, see building LocalAI):


git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build

# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",            
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
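The .env file mentioned above holds runtime settings. A hypothetical example — the variable names here are assumptions based on the defaults shipped in the repository's .env, so check that file for the authoritative list:

```shell
THREADS=4            # number of CPU threads used for inference
CONTEXT_SIZE=512     # model context window, in tokens
MODELS_PATH=/models  # where the container looks for model files
# DEBUG=true         # uncomment for verbose logging
```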

Example: Use GPT4ALL-J model

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --pull always
# or you can build the images with:
# docker-compose up -d --build
# Now the API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
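Responses follow the OpenAI chat schema, so extracting the reply is the same as with the hosted API. A small sketch in Python, using only the standard library and a response body shaped like the example output above:

```python
import json

# Example response body, shaped like the output shown above
raw = ('{"model":"ggml-gpt4all-j","choices":[{"message":'
       '{"role":"assistant","content":"I\'m doing well, thanks."}}]}')


def assistant_reply(body: str) -> str:
    """Return the assistant's text from an OpenAI-style chat response."""
    return json.loads(body)["choices"][0]["message"]["content"]


print(assistant_reply(raw))  # I'm doing well, thanks.
```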

Build locally

To build the LocalAI container image locally, you can use docker:

# build the image
docker build -t localai .
# run it, exposing the API port
docker run -p 8080:8080 localai

Or you can build the binary with make:

make build

See the build section in our documentation for detailed instructions.

Run LocalAI in Kubernetes

LocalAI can be installed inside Kubernetes with Helm. See the installation instructions.

Supported API endpoints

See the list of the supported API endpoints and how to configure image generation and audio transcription.

Frequently asked questions

See the FAQ section for a list of common questions.

Projects already using LocalAI to run local models

Feel free to open up a PR to get your project listed!

Short-term roadmap

Star history

LocalAI Star history Chart

License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT

Author

Ettore Di Giacinto and others

Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

Contributors