Ettore Di Giacinto
6ca4d38a01
docs/examples: enhancements ( #1572 )
...
* docs: re-order sections
* fix references
* Add mixtral-instruct, tinyllama-chat, dolphin-2.5-mixtral-8x7b
* Fix link
* Minor corrections
* fix: models is a StringSlice, not a String
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* WIP: switch docs theme
* content
* Fix GH link
* enhancements
* enhancements
* Fixed how-to link
Signed-off-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
* fixups
* logo fix
* more fixups
* final touches
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
Co-authored-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
2024-01-18 19:41:08 +01:00
Ettore Di Giacinto
db926896bd
Revert "[Refactor]: Core/API Split" ( #1550 )
...
Revert "[Refactor]: Core/API Split (#1506 )"
This reverts commit ab7b4d5ee9.
2024-01-05 18:04:46 +01:00
Dave
ab7b4d5ee9
[Refactor]: Core/API Split ( #1506 )
...
Refactors the api folder to core, creating a firm split between backend code and the api frontend.
2024-01-05 15:34:56 +01:00
Ettore Di Giacinto
66fa4f1767
feat: share models by url ( #1522 )
...
* feat: allow to pass models via args
* expose it also as an env/arg
* docs: enhancements to build/requirements
* do not display status always
* print download status
* not all messages are debug
2024-01-01 10:31:03 +01:00
Ettore Di Giacinto
824612f1b4
feat: initial watchdog implementation ( #1341 )
...
* feat: initial watchdog implementation
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* fixups
* Add more output
* wip: idletime checker
* wire idle watchdog checks
* enlarge watchdog time window
* small fixes
* Use stopmodel
* Always delete process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-26 18:36:23 +01:00
Ettore Di Giacinto
fdd95d1d86
feat: allow to run parallel requests ( #1290 )
...
* feat: allow to run parallel requests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-16 08:20:05 +01:00
Jesús Espino
e91f660eb1
feat(metrics): Adding initial support for prometheus metrics ( #1176 )
...
* feat(metrics): Adding initial support for prometheus metrics
* Fixing CI
* run go mod tidy
2023-10-17 18:22:53 +02:00
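A minimal sketch of scraping the new metrics, assuming they are exposed at the conventional Prometheus /metrics path on a locally running instance (the host, port, and path here are assumptions, not taken from the commit):

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Assumes the conventional Prometheus /metrics path on a local instance;
	// adjust host, port, and path to your setup.
	resp, err := http.Get("http://localhost:8080/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print only the "# HELP" comment lines to get a quick overview of what is exported.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "# HELP") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```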
Jesús Espino
8034ed3473
Adding transcript subcommand ( #1171 )
...
Adding the transcript subcommand to the localai binary
This PR is related to #816
2023-10-15 09:17:41 +02:00
Jesús Espino
ab65f3a17d
Adding the tts command line subcommand ( #1169 )
...
This PR adds the tts (Text to Speech) command to the localai binary.
This PR is related to issue #816
2023-10-14 12:27:35 +02:00
Jesús Espino
8ca671761a
feat(cli): Adding models subcommand with list and install subcommands ( #1165 )
...
Adding subcommands to perform certain actions directly from the command line.
I'm starting with the models subcommand, which lets you list models from
your galleries and install them.
This PR partially fixes #816
My intention is to keep adding other subcommands, but I think this is a
good start and already provides value.
Also, I added a new dependency to render the progress bar in the
command line; it is not strictly needed, but it makes for a nicer interface.
Here is a screenshot:
![image](https://github.com/go-skynet/LocalAI/assets/290303/8d8c1bf0-5340-46ce-9362-812694f914cd )
2023-10-12 10:45:34 +02:00
Ettore Di Giacinto
afdc0ebfd7
feat: add --single-active-backend to allow only one backend active at a time ( #925 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-19 01:49:33 +02:00
Dave
8cb1061c11
Usage Features ( #863 )
2023-08-18 21:23:14 +02:00
Michael Nesbitt
1d1cae8e4d
feat: add API_KEY list support ( #877 )
...
Co-authored-by: Harold Sun <sunhua@amazon.com>
2023-08-10 00:06:21 +02:00
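A minimal client-side sketch for the API key support, assuming the server is started with one or more keys and that clients authenticate with the usual OpenAI-style "Authorization: Bearer" header; the endpoint, address, and key below are placeholders, not values taken from the commit:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder: one of the keys configured on the server (e.g. via an API_KEY setting).
	apiKey := "sk-local-example-key"

	// Assumes the OpenAI-compatible model listing endpoint on a local instance.
	req, err := http.NewRequest(http.MethodGet, "http://localhost:8080/v1/models", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```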
Ettore Di Giacinto
94916749c5
feat: add external grpc and model autoloading
2023-07-20 22:10:12 +02:00
Ettore Di Giacinto
1d0ed95a54
feat: move other backends to grpc
...
This finally makes everything more consistent
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
5dcfdbe51d
feat: various refactorings
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
mudler
b722e7eb7e
feat: cleanups, small enhancements
...
Signed-off-by: mudler <mudler@localai.io>
2023-07-04 18:58:19 +02:00
Ettore Di Giacinto
d3a486a4f8
feat: Add '/version' endpoint and display it in the CLI ( #679 )
2023-06-26 15:12:43 +02:00
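A minimal sketch of querying the new endpoint; the commit names /version, while the local address is a placeholder for wherever the server is listening:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes a locally running instance; adjust the address as needed.
	resp, err := http.Get("http://localhost:8080/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```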
Ettore Di Giacinto
60db5957d3
Gallery repository ( #663 )
...
Signed-off-by: mudler <mudler@localai.io>
2023-06-24 08:18:17 +02:00
Ettore Di Giacinto
a7bb029d23
feat: add tts with go-piper ( #649 )
...
Signed-off-by: mudler <mudler@localai.io>
2023-06-22 17:53:10 +02:00
Sébastien Prud'homme
aa6cdf16c8
fix: display help with correct default values ( #481 )
...
Signed-off-by: Sébastien Prud'homme <sebastien.prudhomme@gmail.com>
2023-06-03 14:25:30 +02:00
Ettore Di Giacinto
78ad4813df
feat: Update gpt4all, support multiple implementations in runtime ( #472 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-01 23:38:52 +02:00
Ettore Di Giacinto
aacb96df7a
fix: correctly handle errors from App constructor ( #430 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-30 12:00:30 +02:00
Ettore Di Giacinto
76c881043e
feat: allow to preload models before startup via env var or configs ( #391 )
2023-05-27 09:26:33 +02:00
Ettore Di Giacinto
043399dd07
fix: re-enable start API message ( #349 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-23 00:06:13 +02:00
Ettore Di Giacinto
6f54cab3f0
feat: allow to set cors ( #339 )
2023-05-21 14:38:25 +02:00
Ettore Di Giacinto
cc9aa9eb3f
feat: add /models/apply endpoint to prepare models ( #286 )
2023-05-18 15:59:03 +02:00
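A sketch of calling the new endpoint; the commit only names /models/apply, so the JSON body below (a model definition URL to install) is an assumption about the request schema, and the address is a placeholder:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: the endpoint accepts a JSON body pointing at a model definition
	// to prepare; check the endpoint's documentation for the real fields.
	payload := []byte(`{"url": "https://example.com/path/to/model-config.yaml"}`)

	resp, err := http.Post("http://localhost:8080/models/apply",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```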
Ettore Di Giacinto
9d051c5d4f
feat: add image generation with ncnn-stablediffusion ( #272 )
2023-05-16 19:32:53 +02:00
Ettore Di Giacinto
fd1df4e971
whisper: add tests and allow to set upload size ( #237 )
2023-05-12 10:04:20 +02:00
Ettore Di Giacinto
9497a24127
fix: hardcode default number of cores to '4' ( #186 )
2023-05-04 18:14:58 +02:00
Jeremy Price
b971807980
Looks for models in $CWD/models/ dir by default ( #169 )
2023-05-03 23:03:31 +02:00
Ettore Di Giacinto
c806eae0de
feat: config files and SSE ( #83 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
Signed-off-by: Tyler Gillson <tyler.gillson@gmail.com>
Co-authored-by: Tyler Gillson <tyler.gillson@gmail.com>
2023-04-26 21:18:18 -07:00
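A sketch of consuming the SSE stream this commit introduces, assuming the OpenAI-compatible /v1/chat/completions endpoint with "stream": true; the address and model name are placeholders for whatever is configured on the server:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Placeholder model name; replace with a model configured on your instance.
	payload := []byte(`{"model": "example-model", "stream": true,
		"messages": [{"role": "user", "content": "Hello"}]}`)

	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each SSE event arrives as a "data: <json>" line; "[DONE]" marks the end of the stream.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue
		}
		chunk := strings.TrimPrefix(line, "data: ")
		if chunk == "[DONE]" {
			break
		}
		fmt.Println(chunk)
	}
}
```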
Ettore Di Giacinto
1c872ec326
feat: add CI/tests ( #58 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-22 00:44:52 +02:00
Ettore Di Giacinto
5cba71de70
Add stopwords, debug mode, and other API enhancements ( #54 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-21 19:46:59 +02:00
Ettore Di Giacinto
ed954d66c3
Do not take all CPU by default ( #50 )
2023-04-21 00:55:19 +02:00
Ettore Di Giacinto
f816dfae65
Add support for stablelm ( #48 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-21 00:06:55 +02:00
Ettore Di Giacinto
d517a54e28
Major API enhancements ( #44 )
2023-04-20 18:33:02 +02:00
Ettore Di Giacinto
80f50e6ccd
Rename project to LocalAI ( #35 )
...
Signed-off-by: mudler <mudler@c3os.io>
2023-04-19 18:43:10 +02:00
Ettore Di Giacinto
7fec26f5d3
Enhancements ( #34 )
...
Signed-off-by: mudler <mudler@c3os.io>
2023-04-19 17:10:29 +02:00
Ettore Di Giacinto
63601fabd1
feat: drop default model and llama-specific API ( #26 )
...
Signed-off-by: mudler <mudler@c3os.io>
2023-04-16 10:40:50 +02:00
Marc R Kellerman
c37175271f
feature: makefile & updates ( #23 )
...
Co-authored-by: mudler <mudler@c3os.io>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-04-15 16:39:07 -07:00
mudler
ae30bd346d
Reorganize repository layout
2023-04-11 23:43:43 +02:00
mudler
48aca246e3
Drop unused interactive mode
2023-04-07 11:31:14 +02:00
mudler
12eee097b7
Make it compatible with the OpenAI API, support multiple models
...
Signed-off-by: mudler <mudler@c3os.io>
2023-04-07 11:30:59 +02:00
mudler
b33d015b8c
Use go-llama.cpp
2023-04-07 10:08:15 +02:00
mudler
650a22aef1
Add compatibility with gpt4all models
2023-03-29 18:53:24 +02:00
mudler
9ba30c9c44
Update llama-go, allow to set context-size and enable alpaca model by default
2023-03-21 19:20:23 +01:00
mudler
2ce1d51ad5
No need to set 0 for default context anymore
2023-03-20 00:12:26 +01:00
mudler
d4720150b5
First import
2023-03-18 23:59:06 +01:00