LocalAI/api
Latest commit: 548959b50f by Ettore Di Giacinto (2023-11-16 22:20:16 +01:00)
feat: queue up requests if not running parallel requests (#1296)

Return a gRPC client that takes a lock in case the backend is not meant to
run parallel requests.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Name               Last commit                                                           Date
backend            feat: allow to run parallel requests (#1290)                          2023-11-16 08:20:05 +01:00
config             feat(llama.cpp): support lora with scale and yarn (#1277)             2023-11-11 18:40:48 +01:00
localai            feat: queue up requests if not running parallel requests (#1296)      2023-11-16 22:20:16 +01:00
openai             fix: respect OpenAI spec for response format (#1289)                  2023-11-15 19:36:23 +01:00
options            feat: allow to run parallel requests (#1290)                          2023-11-16 08:20:05 +01:00
schema             fix: respect OpenAI spec for response format (#1289)                  2023-11-15 19:36:23 +01:00
api_test.go        ci: enlarge download timeout window                                   2023-10-29 22:09:35 +01:00
api.go             feat(metrics): Adding initial support for prometheus metrics (#1176)  2023-10-17 18:22:53 +02:00
apt_suite_test.go  feat: add CI/tests (#58)                                              2023-04-22 00:44:52 +02:00