LocalAI/pkg
Latest commit 548959b50f by Ettore Di Giacinto, 2023-11-16 22:20:16 +01:00:
feat: queue up requests if not running parallel requests (#1296)
Return a gRPC client that handles a lock in case it is not meant to run requests in parallel.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
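
The behaviour described in this commit can be pictured as a lock wrapped around the backend client: when parallel requests are disabled, every call must acquire the lock first, so concurrent requests queue up instead of reaching the backend at once. The sketch below is illustrative only; `Backend`, `lockedBackend`, `wrap`, and `Predict` are hypothetical names, not LocalAI's actual pkg/grpc API.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Backend is a hypothetical stand-in for a gRPC model backend.
type Backend interface {
	Predict(ctx context.Context, prompt string) (string, error)
}

// lockedBackend serializes calls when parallel requests are disabled.
type lockedBackend struct {
	inner Backend
	mu    sync.Mutex
}

func (b *lockedBackend) Predict(ctx context.Context, prompt string) (string, error) {
	b.mu.Lock()         // queue up behind any in-flight request
	defer b.mu.Unlock() // release once the backend has answered
	return b.inner.Predict(ctx, prompt)
}

// wrap returns the backend unchanged when parallel requests are allowed,
// otherwise it returns the lock-guarded wrapper.
func wrap(b Backend, parallel bool) Backend {
	if parallel {
		return b
	}
	return &lockedBackend{inner: b}
}

// slowBackend simulates a model that takes a while to answer.
type slowBackend struct{}

func (slowBackend) Predict(ctx context.Context, prompt string) (string, error) {
	time.Sleep(100 * time.Millisecond)
	return "echo: " + prompt, nil
}

func main() {
	b := wrap(slowBackend{}, false) // parallel requests disabled
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			out, _ := b.Predict(context.Background(), fmt.Sprintf("request %d", i))
			fmt.Println(out) // requests complete one at a time
		}(i)
	}
	wg.Wait()
}
```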
| Directory | Last commit | Date |
|---|---|---|
| assets | feat: Update gpt4all, support multiple implementations in runtime (#472) | 2023-06-01 23:38:52 +02:00 |
| gallery | Fix backend/cpp/llama CMakeList.txt on OSX (#1212) | 2023-10-25 20:53:26 +02:00 |
| grammar | 🔥 add LLaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254) | 2023-11-11 13:14:59 +01:00 |
| grpc | feat: queue up requests if not running parallel requests (#1296) | 2023-11-16 22:20:16 +01:00 |
| langchain | feat: add LangChainGo Huggingface backend (#446) | 2023-06-01 12:00:06 +02:00 |
| model | feat: queue up requests if not running parallel requests (#1296) | 2023-11-16 22:20:16 +01:00 |
| stablediffusion | feat: support upscaled image generation with esrgan (#509) | 2023-06-05 17:21:38 +02:00 |
| utils | feat: Model Gallery Endpoint Refactor / Mutable Galleries Endpoints (#991) | 2023-09-02 09:00:44 +02:00 |