[![Mend Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/gofiber/fiber/v2](https://togithub.com/gofiber/fiber) | require | minor | `v2.48.0` -> `v2.49.0` |

---

### Release Notes

<details>
<summary>gofiber/fiber (github.com/gofiber/fiber/v2)</summary>

### [`v2.49.0`](https://togithub.com/gofiber/fiber/releases/tag/v2.49.0)

[Compare Source](https://togithub.com/gofiber/fiber/compare/v2.48.0...v2.49.0)

#### ❗ Breaking Changes

- Add config to enable splitting by comma in parsers ([#2560](https://togithub.com/gofiber/fiber/issues/2560))

  https://docs.gofiber.io/api/fiber#config

  > `EnableSplittingOnParsers` splits the query/body/header parameters by comma when it's `true` (default: `false`).
  >
  > For example, you can use it to parse multiple values from a query parameter like this:
  > `/api?foo=bar,baz` == `foo[]=bar&foo[]=baz`

#### 🚀 New

- Add custom data property to favicon middleware config ([#2579](https://togithub.com/gofiber/fiber/issues/2579))

  https://docs.gofiber.io/api/middleware/favicon#config

  > This allows the user to use `//go:embed` to load favicon data during build-time, and supply it to the middleware instead of reading the file every time the application starts.
#### 🧹 Updates

- Middleware/logger: Latency match gin-gonic/gin formatter ([#2569](https://togithub.com/gofiber/fiber/issues/2569))
- Middleware/filesystem: Refactor: use `errors.Is` instead of `os.IsNotExist` ([#2558](https://togithub.com/gofiber/fiber/issues/2558))
- Use global vars instead of local vars for `isLocalHost` ([#2595](https://togithub.com/gofiber/fiber/issues/2595))
- Remove redundant nil check ([#2584](https://togithub.com/gofiber/fiber/issues/2584))
- Bump github.com/mattn/go-runewidth from 0.0.14 to 0.0.15 ([#2551](https://togithub.com/gofiber/fiber/issues/2551))
- Bump github.com/google/uuid from 1.3.0 to 1.3.1 ([#2592](https://togithub.com/gofiber/fiber/issues/2592))
- Bump golang.org/x/sys from 0.10.0 to 0.11.0 ([#2563](https://togithub.com/gofiber/fiber/issues/2563))
- Add Go 1.21 to CI and READMEs ([#2588](https://togithub.com/gofiber/fiber/issues/2588))

#### 🐛 Fixes

- Middleware/logger: Default latency output format ([#2580](https://togithub.com/gofiber/fiber/issues/2580))
- Decompress request body when multiple Content-Encoding values are sent in request headers ([#2555](https://togithub.com/gofiber/fiber/issues/2555))

#### 📚 Documentation

- Fix wrong JSON docs ([#2554](https://togithub.com/gofiber/fiber/issues/2554))
- Update `io/ioutil` package to `io` package ([#2589](https://togithub.com/gofiber/fiber/issues/2589))
- Replace EG flag with the proper and smaller SVG ([#2585](https://togithub.com/gofiber/fiber/issues/2585))
- Added Egyptian Arabic README file ([#2565](https://togithub.com/gofiber/fiber/issues/2565))
- Translate README to Portuguese ([#2567](https://togithub.com/gofiber/fiber/issues/2567))
- Improve `*fiber.Client` section ([#2553](https://togithub.com/gofiber/fiber/issues/2553))
- Improved the config section of the middleware READMEs ([#2552](https://togithub.com/gofiber/fiber/issues/2552))
- Added documentation about `ctx.Fresh` ([#2549](https://togithub.com/gofiber/fiber/issues/2549))
- Update intro.md ([#2550](https://togithub.com/gofiber/fiber/issues/2550))
- Fixed link to slim template engine ([#2547](https://togithub.com/gofiber/fiber/issues/2547))

**Full Changelog**: https://github.com/gofiber/fiber/compare/v2.48.0...v2.49.0

Thank you [@Jictyvoo](https://togithub.com/Jictyvoo), [@Juneezee](https://togithub.com/Juneezee), [@Kirari04](https://togithub.com/Kirari04), [@LimJiAn](https://togithub.com/LimJiAn), [@PassTheMayo](https://togithub.com/PassTheMayo), [@andersonmiranda-com](https://togithub.com/andersonmiranda-com), [@bigpreshy](https://togithub.com/bigpreshy), [@efectn](https://togithub.com/efectn), [@renanbastos93](https://togithub.com/renanbastos93), [@scandar](https://togithub.com/scandar), [@sixcolors](https://togithub.com/sixcolors) and [@stefanb](https://togithub.com/stefanb) for making this update possible.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Mend Renovate](https://www.mend.io/free-developer-tools/renovate/). View repository job log [here](https://developer.mend.io/github/go-skynet/LocalAI).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
# LocalAI
💡 Get help - ❓FAQ 💭Discussions 💬 Discord 📖 Documentation website
LocalAI is a drop-in replacement REST API that's compatible with the OpenAI API specification for local inferencing. It allows you to run LLMs (and more) locally or on-prem on consumer-grade hardware, supporting multiple model families compatible with the ggml format. It does not require a GPU.
In a nutshell:
- Local, OpenAI drop-in alternative REST API. You own your data.
- NO GPU required. NO Internet access is required either.
- Optional GPU acceleration is available for `llama.cpp`-compatible LLMs. See also the build section.
- Supports multiple models.
- 🏃 Once loaded the first time, it keeps models loaded in memory for faster inference.
- ⚡ Doesn't shell out, but uses C++ bindings for faster inference and better performance.
LocalAI was created by Ettore Di Giacinto and is a community-driven project focused on making AI accessible to anyone. Any contribution, feedback, and PR is welcome!

Note that this started as a fun weekend project to create the necessary pieces for a full AI assistant like ChatGPT; the community is growing fast and we are working hard to make it better and more stable. If you want to help, please consider contributing (see below)!
## 🔥🔥 Hot topics / Roadmap
## 🚀 Features

- 📖 Text generation with GPTs (`llama.cpp`, `gpt4all.cpp`, ... and more)
- 🗣 Text to Audio
- 🔈 Audio to Text (audio transcription with `whisper.cpp`)
- 🎨 Image generation with stable diffusion
- 🔥 OpenAI functions 🆕
- 🧠 Embeddings generation for vector databases
- ✍️ Constrained grammars
- 🖼️ Download Models directly from Huggingface
## 📖 🎥 Media, Blogs, Social
- Create a slackbot for teams and OSS projects that answer to documentation
- LocalAI meets k8sgpt
- Question Answering on Documents locally with LangChain, LocalAI, Chroma, and GPT4All
- Tutorial to use k8sgpt with LocalAI
## 💻 Usage
Check out the Getting started section in our documentation.
### 💡 Example: Use GPT4ALL-J model
See the documentation
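The full walkthrough lives in the docs; as a quick sketch, the request shape is plain OpenAI-style JSON. A minimal Go example that builds a chat-completion request against a local instance — the `localhost:8080` address (LocalAI's default) and the `ggml-gpt4all-j` model name are assumptions for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest mirrors the OpenAI chat-completions payload that LocalAI accepts.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// newChatRequest builds an HTTP request for a LocalAI instance at base.
func newChatRequest(base, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, base+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := newChatRequest("http://localhost:8080", "ggml-gpt4all-j", "How are you?")
	fmt.Println(req.Method, req.URL) // POST http://localhost:8080/v1/chat/completions
}
```

Send the request with `http.DefaultClient.Do(req)` once a LocalAI instance with the model loaded is running; the response follows the OpenAI chat-completions shape.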
## 🔗 Resources
## ❤️ Sponsors
Do you find LocalAI useful?
Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.
A huge thank you to our generous sponsors who support this project:
Spectro Cloud kindly supports LocalAI by providing GPU and computing resources to run tests on Lambda Labs!
## 🌟 Star history
## 📖 License
LocalAI is a community-driven project created by Ettore Di Giacinto.
MIT - Author Ettore Di Giacinto
## 🙇 Acknowledgements
LocalAI couldn't have been built without the help of great software already available from the community. Thank you!
- llama.cpp
- https://github.com/tatsu-lab/stanford_alpaca
- https://github.com/cornelk/llama-go for the initial ideas
- https://github.com/antimatter15/alpaca.cpp
- https://github.com/EdVince/Stable-Diffusion-NCNN
- https://github.com/ggerganov/whisper.cpp
- https://github.com/saharNooby/rwkv.cpp
- https://github.com/rhasspy/piper
- https://github.com/cmp-nct/ggllm.cpp
## 🤗 Contributors
This is a community project, a special thanks to our contributors! 🤗