🤖 The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs ggml, gguf, GPTQ, onnx, TF compatible models: llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others
Commit f9d2bd24eb by Max Cohen — Allow manually setting the seed for the SD pipeline (#998)
**Description**

Enable setting the seed for the stable diffusion pipeline. This is done
through an additional `seed` parameter in the request, such as:

```bash
curl http://localhost:8080/v1/images/generations \
    -H "Content-Type: application/json" \
    -d '{"model": "stablediffusion", "prompt": "prompt", "n": 1, "step": 51, "size": "512x512", "seed": 3}'
```

**Notes for Reviewers**
When the `seed` parameter is not sent, `request.seed` defaults to `0`,
making it impossible to distinguish an explicit seed of `0` from an
unset one. Is there a way to change the default to `-1`, for instance?
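
One client-side way around that ambiguity is to leave `seed` out of the JSON body entirely when no seed is wanted, instead of sending `0`. The sketch below is hypothetical (the `image_request` helper is not part of LocalAI) and assumes the endpoint and parameters shown in the curl example above:

```python
import json

def image_request(prompt, seed=None, size="512x512", step=51, n=1):
    """Build the JSON body for POST /v1/images/generations.

    Omitting `seed` leaves the field out of the body entirely, so the
    server-side default applies; passing seed=0 sends an explicit 0.
    """
    body = {"model": "stablediffusion", "prompt": prompt,
            "n": n, "step": step, "size": size}
    if seed is not None:
        body["seed"] = seed
    return json.dumps(body)

print(image_request("a lighthouse at dusk", seed=3))
```

Posting the result to `http://localhost:8080/v1/images/generations` with `Content-Type: application/json` is then equivalent to the curl call above.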


Committed 2023-09-04 19:10:55 +02:00



LocalAI


💡 Get help - FAQ 💭Discussions 💬 Discord 📖 Documentation website

💻 Quickstart 📣 News 🛫 Examples 🖼️ Models


LocalAI is a drop-in replacement REST API compatible with the OpenAI API specification for local inferencing. It lets you run LLMs (and not only LLMs) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. No GPU is required.



In a nutshell:

  • Local, OpenAI drop-in alternative REST API. You own your data.
  • NO GPU required. NO Internet access is required either.
    • Optional GPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.
  • Supports multiple model families
  • 🏃 Once loaded the first time, models are kept in memory for faster inference
  • Doesn't shell out: it uses C++ bindings for faster inference and better performance.

LocalAI was created by Ettore Di Giacinto and is a community-driven project focused on making AI accessible to anyone. Contributions, feedback and PRs are welcome!

Note that this started as a fun weekend project to build the pieces needed for a full AI assistant like ChatGPT; the community is growing fast and we are working hard to make it better and more stable. If you want to help, please consider contributing (see below)!

🔥🔥 Hot topics / Roadmap

🚀 Features

📖 🎥 Media, Blogs, Social

💻 Usage

Check out the Getting started section in our documentation.

💡 Example: Use GPT4ALL-J model

See the documentation
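
Because LocalAI mirrors the OpenAI request schema, a chat completion is just a POST of the usual OpenAI-style body to `http://localhost:8080/v1/chat/completions`. A minimal sketch of that body (the `ggml-gpt4all-j` model name is illustrative; use whichever model file you have installed in your models directory):

```python
import json

def chat_request(model, prompt, temperature=0.9):
    """Build the JSON body for POST /v1/chat/completions.

    The schema matches the OpenAI API, which is why existing OpenAI
    client libraries work against LocalAI unchanged (just point them
    at the local base URL).
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    })

print(chat_request("ggml-gpt4all-j", "How are you?"))
```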

🔗 Resources

❤️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project:

Spectro Cloud
Spectro Cloud kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!

🌟 Star history

LocalAI Star history Chart

📖 License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto

🙇 Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

🤗 Contributors

This is a community project, a special thanks to our contributors! 🤗