🤖 The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs ggml, gguf, GPTQ, ONNX, and TF-compatible models: llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others.



LocalAI


💡 Get help - FAQ 💭 Discussions 💬 Discord 📖 Documentation website

💻 Quickstart 📣 News 🛫 Examples 🖼️ Models


LocalAI is a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing. It allows you to run LLMs (and more) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format, PyTorch, and more. It does not require a GPU.
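Because the API follows the OpenAI specification, any OpenAI-style HTTP request can be pointed at a running instance. Below is a minimal sketch in Python, assuming a LocalAI server listening on http://localhost:8080 and a locally installed model exposed under the name gpt-3.5-turbo (both are assumptions; substitute your own address and model name):

# Minimal sketch: call the OpenAI-compatible chat-completions endpoint of a
# local LocalAI instance. The address and model name below are assumptions.
import requests

BASE_URL = "http://localhost:8080/v1"

payload = {
    "model": "gpt-3.5-turbo",  # replace with the model name configured locally
    "messages": [{"role": "user", "content": "How are you?"}],
    "temperature": 0.7,
}

resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])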



In a nutshell:

  • Local, OpenAI drop-in alternative REST API. You own your data.
  • NO GPU required. NO Internet access is required either.
    • Optional GPU acceleration is available for llama.cpp-compatible LLMs. See also the build section.
  • Supports multiple models
  • 🏃 Once loaded the first time, it keeps models loaded in memory for faster inference
  • Doesn't shell out, but uses C++ bindings for faster inference and better performance.

LocalAI was created by Ettore Di Giacinto and is a community-driven project, focused on making AI accessible to anyone. Any contributions, feedback and PRs are welcome!

Note that this started as a fun weekend project to create the necessary pieces for a full AI assistant like ChatGPT. The community is growing fast and we are working hard to make it better and more stable. If you want to help, please consider contributing (see below)!

🔥🔥 Hot topics / Roadmap

🚀 Features

📖 🎥 Media, Blogs, Social

💻 Usage

Check out the Getting started section in our documentation.
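As a quick sanity check once an instance is up, the OpenAI-compatible /v1/models endpoint lists the models the server currently exposes. A minimal sketch, again assuming the server is reachable at the assumed default http://localhost:8080:

# Minimal sketch: list the models exposed by a running LocalAI instance via the
# OpenAI-compatible /v1/models endpoint (the server address is an assumption).
import requests

resp = requests.get("http://localhost:8080/v1/models", timeout=30)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])  # each entry carries the model's id/name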

💡 Example: Use Luna-AI Llama model

See the documentation
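For a rough idea of what a request against such a model could look like, here is a hedged sketch using the pre-1.0 openai Python client as a drop-in, pointed at a local instance; the server address and the model name luna-ai-llama2 are assumptions, so follow the linked documentation for the actual model setup:

# Hedged sketch: reuse the (pre-1.0) openai Python client against LocalAI.
# The address and the "luna-ai-llama2" model name are assumptions.
import openai

openai.api_base = "http://localhost:8080/v1"  # point the client at LocalAI
openai.api_key = "not-needed"                 # placeholder; the client just needs a value

completion = openai.ChatCompletion.create(
    model="luna-ai-llama2",
    messages=[{"role": "user", "content": "How are you doing?"}],
)
print(completion.choices[0].message.content)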

🔗 Resources

Citation

If you utilize this repository or its data in a downstream project, please consider citing it with:

@misc{localai,
  author = {Ettore Di Giacinto},
  title = {LocalAI: The free, Open source OpenAI alternative},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/go-skynet/LocalAI}},
}

❤️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project:

Spectro Cloud
Spectro Cloud kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!

And a huge shout-out to individuals sponsoring the project by donating hardware or backing the project.

🌟 Star history

LocalAI Star history Chart

📖 License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto

🙇 Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

🤗 Contributors

This is a community project, a special thanks to our contributors! 🤗