Ettore Di Giacinto
824612f1b4
feat: initial watchdog implementation ( #1341 )
...
* feat: initial watchdog implementation
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* fixups
* Add more output
* wip: idletime checker
* wire idle watchdog checks (a minimal idle-watchdog sketch follows this entry)
* enlarge watchdog time window
* small fixes
* Use stopmodel
* Always delete process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-26 18:36:23 +01:00
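The watchdog commit above wires an idle-time checker that stops backends once they have gone unused for a configured window. Below is a minimal Go sketch of that pattern, not LocalAI's actual implementation; the Watchdog, Stopper, Mark, and StopModel names are assumptions for illustration.

```go
package watchdog

import (
	"sync"
	"time"
)

// Stopper is whatever knows how to shut a backend down. In LocalAI this role
// is played by the model loader's stop-model call; the interface here is a
// stand-in for illustration.
type Stopper interface {
	StopModel(name string) error
}

// Watchdog records when each backend was last used and stops the ones that
// stay idle longer than idleTimeout.
type Watchdog struct {
	sync.Mutex
	lastUsed    map[string]time.Time
	idleTimeout time.Duration
	stopper     Stopper
}

func New(s Stopper, idleTimeout time.Duration) *Watchdog {
	return &Watchdog{
		lastUsed:    map[string]time.Time{},
		idleTimeout: idleTimeout,
		stopper:     s,
	}
}

// Mark notes that a backend has just served a request.
func (w *Watchdog) Mark(model string) {
	w.Lock()
	defer w.Unlock()
	w.lastUsed[model] = time.Now()
}

// Run periodically scans the idle-time window and stops stale backends,
// always deleting the bookkeeping entry for the stopped process.
func (w *Watchdog) Run(interval time.Duration) {
	for range time.Tick(interval) {
		w.Lock()
		for model, t := range w.lastUsed {
			if time.Since(t) > w.idleTimeout {
				_ = w.stopper.StopModel(model)
				delete(w.lastUsed, model)
			}
		}
		w.Unlock()
	}
}
```

A server would call Mark on every request and run the checker in a background goroutine (`go wd.Run(time.Minute)`), mirroring the "wire idle watchdog checks" and "Use stopmodel" steps listed above.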
Ettore Di Giacinto
548959b50f
feat: queue up requests if not running parallel requests ( #1296 )
...
Return a gRPC client that handles a lock in case requests are not meant to run in
parallel (a minimal sketch of the locking follows this entry).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-16 22:20:16 +01:00
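The commit above describes returning a gRPC client guarded by a lock when parallel requests are disabled, so concurrent callers queue up instead of overlapping. Here is a minimal sketch of that idea, assuming a generic Client interface rather than LocalAI's generated gRPC types.

```go
package backend

import "sync"

// Client stands in for the generated gRPC backend client; the Predict method
// is an assumption for illustration, not LocalAI's actual API surface.
type Client interface {
	Predict(prompt string) (string, error)
}

// lockedClient serializes calls to the underlying client, so a backend that
// cannot handle parallel requests only ever sees one request at a time.
type lockedClient struct {
	mu    sync.Mutex
	inner Client
}

func (c *lockedClient) Predict(prompt string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.inner.Predict(prompt)
}

// Wrap returns the client unchanged when parallel requests are enabled, and a
// lock-guarded wrapper otherwise, so callers queue up instead of hitting the
// backend concurrently.
func Wrap(c Client, parallelRequests bool) Client {
	if parallelRequests {
		return c
	}
	return &lockedClient{inner: c}
}
```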
Ettore Di Giacinto
fdd95d1d86
feat: allow to run parallel requests ( #1290 )
...
* feat: allow to run parallel requests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-16 08:20:05 +01:00
Dave
10b0e13882
feat: backend monitor shutdown endpoint, process based ( #938 )
...
This PR adds a new endpoint to the backend monitor section,
`/backend/shutdown`, which terminates the gRPC process for the related
model (a minimal client-side sketch follows this entry).
2023-08-23 18:38:37 +02:00
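The `/backend/shutdown` path is taken from the commit above; the request shape (a JSON body carrying the model name) and the base URL are assumptions for illustration only. A hedged Go sketch of calling such an endpoint:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// shutdownBackend asks the API to terminate the gRPC process serving a model.
// The /backend/shutdown path comes from the commit above; the JSON body with
// a "model" field and the default base URL are assumptions for illustration.
func shutdownBackend(baseURL, model string) error {
	payload, err := json.Marshal(map[string]string{"model": model})
	if err != nil {
		return err
	}
	resp, err := http.Post(baseURL+"/backend/shutdown", "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("shutdown failed: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := shutdownBackend("http://localhost:8080", "my-model"); err != nil {
		fmt.Println(err)
	}
}
```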
Ettore Di Giacinto
afdc0ebfd7
feat: add --single-active-backend to allow only one backend active at a time ( #925 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-19 01:49:33 +02:00
Dave
8cb1061c11
Usage Features ( #863 )
2023-08-18 21:23:14 +02:00
Ettore Di Giacinto
a843e64fc2
feat: add initial AutoGPTQ backend implementation
2023-08-07 22:53:28 +02:00
Dave
7fb8b4191f
feat: "simple" chat/edit/completion template system prompt from config ( #856 )
2023-08-03 00:19:55 +02:00
Dave
ce8e9dc690
feature: model list :: filter query string parameter ( #830 )
2023-07-31 19:14:32 +02:00
Aman Gupta Karmani
12fe0932c4
feat: cancel stream generation if client disappears ( #792 )
2023-07-24 23:10:54 +02:00
Dave
c6bf67f446
feat(llama2): add template for chat messages ( #782 )
...
Co-authored-by: Aman Karmani <aman@tmm1.net>
Lays some of the groundwork for LLAMA2 compatibility, as well as for other future models with complex prompting schemes.
Starts a small refactoring of template loading in pkg/model/loader.go. It is currently still part of ModelLoader, but it should be easy to add template loading for situations other than overall prompt templates and the new chat-specific per-message templates.
Adds support for new chat-endpoint-specific, per-message templates as an alternative to the existing Role: XYZ sprintf method (a minimal template sketch follows this entry).
Includes a temporary prompt template as an example, since I have a few questions before we merge in the model-gallery side changes (see ).
Minor debug logging changes.
2023-07-22 11:31:39 -04:00
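The commit above introduces per-message chat templates as an alternative to formatting each message with a Role: XYZ sprintf call. Below is a minimal Go text/template sketch of that approach; the ChatMessage fields and the LLAMA2-style template string are assumptions for illustration, not the exact structures used in pkg/model/loader.go.

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// ChatMessage is a stand-in for the data handed to a per-message template;
// the field names are assumptions for illustration, not the exact struct
// used by the model loader.
type ChatMessage struct {
	RoleName string
	Content  string
}

func main() {
	// Each chat message is rendered through its own template instead of a
	// fmt.Sprintf("Role: %s ...") call; the LLAMA2-flavoured markup below is
	// only an example.
	const perMessage = `{{if eq .RoleName "system"}}<<SYS>>{{.Content}}<</SYS>>{{else}}[INST] {{.Content}} [/INST]{{end}}
`

	tmpl := template.Must(template.New("chat").Parse(perMessage))
	msgs := []ChatMessage{
		{RoleName: "system", Content: "You are a helpful assistant."},
		{RoleName: "user", Content: "Hello!"},
	}
	for _, m := range msgs {
		if err := tmpl.Execute(os.Stdout, m); err != nil {
			fmt.Println(err)
		}
	}
}
```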
Ettore Di Giacinto
1d0ed95a54
feat: move other backends to grpc
...
This finally makes everything more consistent
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
b816009db0
feat: add falcon ggllm via grpc client
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
85f0f8227d
refactor: drop code dups ( #234 )
2023-05-11 16:34:16 +02:00
Ettore Di Giacinto
59e3c02002
make use of new bindings for gpt4all ( #232 )
2023-05-11 14:31:19 +02:00
Matthew Campbell
032dee256f
Keep whisper models in memory ( #233 )
2023-05-11 14:05:07 +02:00
Ettore Di Giacinto
11675932ac
feat: add dolly/redpajama/bloomz models support ( #214 )
2023-05-11 01:12:58 +02:00
Ettore Di Giacinto
f8ee20991c
feat: add bert.cpp embeddings ( #222 )
2023-05-10 15:20:21 +02:00
Ettore Di Giacinto
c839b334eb
feat: add embeddings for go-llama.cpp backend ( #190 )
2023-05-05 11:20:06 +02:00
Ettore Di Giacinto
714bfcd45b
fix: missing returning error and free callback stream ( #187 )
2023-05-04 19:49:43 +02:00
Ettore Di Giacinto
751b7eca62
feat: add rwkv support ( #158 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-03 11:45:22 +02:00
Ettore Di Giacinto
1ae7150810
feat: allow to specify default backend for model ( #156 )
...
Signed-off-by: mudler <mudler@c3os.io>
2023-05-03 00:31:28 +02:00
Ettore Di Giacinto
156e15a4fa
Bump llama.cpp, downgrade gpt4all-j ( #149 )
2023-05-02 16:07:18 +02:00
Ettore Di Giacinto
92452d46da
feat: add new gpt4all-j binding ( #142 )
2023-05-01 20:00:15 +02:00
Ettore Di Giacinto
c806eae0de
feat: config files and SSE ( #83 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
Signed-off-by: Tyler Gillson <tyler.gillson@gmail.com>
Co-authored-by: Tyler Gillson <tyler.gillson@gmail.com>
2023-04-26 21:18:18 -07:00
Ettore Di Giacinto
f816dfae65
Add support for stablelm ( #48 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-21 00:06:55 +02:00
Ettore Di Giacinto
1c4fbaae20
Add support for cerebras ( #45 )
...
Signed-off-by: mudler <mudler@c3os.io>
2023-04-20 19:33:36 +02:00
Ettore Di Giacinto
d517a54e28
Major API enhancements ( #44 )
2023-04-20 18:33:02 +02:00
Ettore Di Giacinto
7fec26f5d3
Enhancements ( #34 )
...
Signed-off-by: mudler <mudler@c3os.io>
2023-04-19 17:10:29 +02:00
mudler
5556aa46dd
Small refinements and refactors
2023-04-12 00:02:39 +02:00
mudler
ae30bd346d
Reorganize repository layout
2023-04-11 23:43:43 +02:00