Commit Graph

914 Commits

Author SHA1 Message Date
LocalAI [bot]
4e16bc2f13
⬆️ Update ggerganov/llama.cpp (#1256)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-08 08:21:12 +01:00
LocalAI [bot]
562ac62f59
⬆️ Update ggerganov/llama.cpp (#1242)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-07 08:37:55 +01:00
Diego
e7fa2e06f8
Fixes bug #1196 (#1232)
* Current state of the branch.

* Now gRPC is built only when the BUILD_GRPC_FOR_BACKEND_LLAMA variable is defined.

* Now the local compilation of gRPC is executed only when BUILD_GRPC_FOR_BACKEND_LLAMA is set.

* Revised the Makefile.

* Removed replace directives in go.mod.

---------

Signed-off-by: Diego <38375572+diego-minguzzi@users.noreply.github.com>
Co-authored-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-11-06 19:07:46 +01:00
Ettore Di Giacinto
8123f009d0 dockerfile: fixup duplicate
This should have been "exllama"

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-05 14:09:31 +01:00
Ettore Di Giacinto
622aaa9f7d dockerfile: avoid pushing a big layer
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-05 10:31:33 +01:00
Ettore Di Giacinto
7b1ee203ce tests: re-add flake-attempts
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-05 09:01:03 +01:00
Ettore Di Giacinto
f347e51927
feat(conda): conda environments (#1144)
* feat(autogptq): add a separate conda environment for autogptq (#1137)

**Description**

This PR is related to #1117

**Notes for Reviewers**

Here we lock down the versions of the dependencies, to make sure the backend
keeps working even if newer versions of those dependencies are released.

I changed the order of the imported packages according to pylint, without
changing the logic of the code, so it should be fine.

I will investigate further how to write test cases for every backend. I can
run the service in my environment, but there is no way to test it
automatically yet, so I am not fully confident in it.

Adds a README.md in the `grpc` root with the common commands for creating a
`conda` environment; it can be used as a reference when documenting
additional gRPC backends.

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* [Extra backend] Add separate environment for ttsbark (#1141)

**Description**

This PR relates to #1117

**Notes for Reviewers**

Same as the previous PR:
* The code is also changed, but only the order of the imported packages;
some code comments are also added.
* Add a configuration for the `conda` environment
* Add a simple test case for checking that the service can start up in the
current `conda` environment. It succeeds in VSCode, but it does not work out
of the box in the terminal, so it is hard to say how useful the test case
really is.

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): add make target and entrypoints for the dockerfile

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate conda env for diffusers (#1145)

**Description**

This PR relates to  #1117

**Notes for Reviewers**

* Add `conda` env `diffusers.yml`
* Add Makefile to create it automatically
* Add `run.sh` to support running as an extra backend
  * Also adding it to the main Dockerfile
* Add make command in the root Makefile
* Tested the server; it can start up under the env

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate env for vllm (#1148)

**Description**

This PR is related to #1117

**Notes for Reviewers**

* The gRPC server can be started as normal
* The test case can be triggered in VSCode
* As in the other PRs of this kind, add `vllm.yml` and a Makefile, add
`run.sh` to the main Dockerfile, and add the command to the main Makefile

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate env for huggingface (#1146)

**Description**

This PR is related to  #1117

**Notes for Reviewers**

* Add conda env `huggingface.yml`
* Change the import order, and also remove the unused packages
* Add `run.sh` and `make command` to the main Dockerfile and Makefile
* Add test cases for it. They can be triggered and succeed under the VSCode
Python extension, but they hang when run with `python -m unittest
test_huggingface.py` in the terminal

```
Running tests (unittest): /workspaces/LocalAI/extra/grpc/huggingface
Running tests: /workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_embedding
/workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_load_model
/workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_server_startup
./test_huggingface.py::TestBackendServicer::test_embedding Passed

./test_huggingface.py::TestBackendServicer::test_load_model Passed

./test_huggingface.py::TestBackendServicer::test_server_startup Passed

Total number of tests expected to run: 3
Total number of tests run: 3
Total number of tests passed: 3
Total number of tests failed: 0
Total number of tests failed with errors: 0
Total number of tests skipped: 0

Finished running tests!
```

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add the separate conda env for VALL-E X (#1147)

**Description**

This PR is related  to #1117

**Notes for Reviewers**

* The gRPC server cannot start up

```
(ttsvalle) @Aisuko ➜ /workspaces/LocalAI (feat/vall-e-x) $ /opt/conda/envs/ttsvalle/bin/python /workspaces/LocalAI/extra/grpc/vall-e-x/ttsvalle.py
Traceback (most recent call last):
  File "/workspaces/LocalAI/extra/grpc/vall-e-x/ttsvalle.py", line 14, in <module>
    from utils.generation import SAMPLE_RATE, generate_audio, preload_models
ModuleNotFoundError: No module named 'utils'
```

The installation steps follow
https://github.com/Plachtaa/VALL-E-X#-installation below:

* Under the `ttsvalle` conda env

```
git clone https://github.com/Plachtaa/VALL-E-X.git
cd VALL-E-X
pip install -r requirements.txt
```

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: set image type

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(conda): Add separate conda env for exllama (#1149)

Add separate env for exllama

Signed-off-by: Aisuko <urakiny@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Setup conda

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Set image_type arg

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: prepare only conda env in tests

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Dockerfile: comment manual pip calls

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* conda: add conda to PATH

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fixes

* add shebang

* Fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* file perms

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* debug

* Install new conda in the worker

* Disable GPU tests for now until the worker is back

* Rename workflows

* debug

* Fixup conda install

* fixup(wrapper): pass args

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Aisuko <urakiny@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Aisuko <urakiny@gmail.com>
2023-11-04 15:30:32 +01:00
LocalAI [bot]
9b17af18b3
⬆️ Update ggerganov/llama.cpp (#1236)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-03 19:23:53 +01:00
Samuel Walker
23c7fbfe6b
chainlit example (#1238) 2023-11-02 22:56:46 +01:00
Samuel Walker
035fea676a
llama index example (#1237) 2023-11-02 13:35:06 -07:00
Vitor Oliveira
6e1a234d15
feat(certificates): add support for custom CA certificates (#880)
This change facilitates users working behind corporate firewalls or proxies. By allowing the integration of custom CA certificates, users can handle SSL connections that are intercepted by company infrastructure.
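For illustration only (a minimal sketch of the general technique, not LocalAI's actual implementation), a Go HTTP client can trust an extra corporate CA by appending the PEM bundle to the system certificate pool; the file path and URL below are placeholders:

```
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Placeholder path for the custom/corporate CA bundle.
	caPEM, err := os.ReadFile("/etc/ssl/certs/corporate-ca.pem")
	if err != nil {
		panic(err)
	}

	// Start from the system pool and append the custom CA on top of it.
	pool, err := x509.SystemCertPool()
	if err != nil || pool == nil {
		pool = x509.NewCertPool()
	}
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse the custom CA certificate")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}

	// Placeholder URL reached through the intercepting proxy.
	resp, err := client.Get("https://models.example.internal/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```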
2023-11-01 20:10:14 +01:00
LocalAI [bot]
5b596ea605
⬆️ Update ggerganov/llama.cpp (#1231)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-01 12:44:34 +00:00
Dave
6bd56460de
Update .gitignore for backend/llama.cpp (#1235)
Signed-off-by: Dave <dave@gray101.com>
2023-11-01 09:52:02 +01:00
LocalAI [bot]
6ef7ea2635
⬆️ Update ggerganov/llama.cpp (#1207)
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-30 08:00:36 +00:00
Ettore Di Giacinto
f8c00fbaf1 ci: enlarge download timeout window
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-29 22:09:35 +01:00
Ettore Di Giacinto
d9a42cc4c5
ci: run only cublas on selfhosted (#1224)
* ci: run only cublas on selfhosted

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* debug

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* update git

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* change testing embeddings model link

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-29 22:04:43 +01:00
Ettore Di Giacinto
fc0bc32814
ci: use self-hosted to build container images (#1206)
ci: use self-hosted

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-10-26 21:13:40 +02:00
Ettore Di Giacinto
c62504ac92
cleanup: drop bloomz and ggllm as now supported by llama.cpp (#1217)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-26 07:43:31 +02:00
Ettore Di Giacinto
f227e918f9
feat(llama.cpp): Bump llama.cpp, adapt grpc server (#1211)
* feat(llama.cpp): Bump llama.cpp, adapt grpc server

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: fixups

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-25 20:56:25 +02:00
Ettore Di Giacinto
c132dbadce
docs(examples): Add mistral example (#1214)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-25 20:56:12 +02:00
Dave
b839eb80a1
Fix backend/cpp/llama CMakeList.txt on OSX (#1212)
* Fix backend/cpp/llama CMakeList.txt on OSX - detect OSX and use homebrew libraries

* sneak a logging fix in too for gallery debugging

* additional logging
2023-10-25 20:53:26 +02:00
renovate[bot]
23b03a7f03
fix(deps): update module github.com/onsi/gomega to v1.28.1 (#1205)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-24 09:16:02 +02:00
LocalAI [bot]
9196583651
⬆️ Update ggerganov/llama.cpp (#1204)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-23 19:06:39 +02:00
Ettore Di Giacinto
fd28252e55 fix(Dockerfile): try to save some space
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-22 17:13:39 +02:00
renovate[bot]
94f20e2eb7
fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to c25dc51 (#1191)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-22 16:58:45 +02:00
Ettore Di Giacinto
5ced99a8e7
ci: more cleanup for workers
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-10-22 12:27:04 +02:00
LocalAI [bot]
c377e61ff0
⬆️ Update go-skynet/go-llama.cpp (#1156)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-22 08:55:44 +02:00
Ettore Di Giacinto
a6fe0a020a
feat(llama.cpp): update (#1200)
**Description**

This PR updates llama.cpp to
465219b914

Supersedes #1195
2023-10-21 18:44:37 +02:00
Ettore Di Giacinto
bf2ed3d752 fix(Dockerfile): piper phonemize is required during build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-21 16:40:41 +02:00
Ettore Di Giacinto
d17a92eef3 example(bruno): add image generation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-21 11:38:23 +02:00
Ettore Di Giacinto
1a7be035d3 fix(Makefile): build all backends if none is specified
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-21 11:34:59 +02:00
Ettore Di Giacinto
004baaa30f feat(llama.cpp): update
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-21 11:04:03 +02:00
renovate[bot]
ef19268418
chore(deps): update actions/checkout action to v4 (#1006)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-21 08:55:44 +02:00
renovate[bot]
e82470341f
fix(deps): update module google.golang.org/grpc to v1.59.0 (#1189)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-20 17:04:14 +02:00
renovate[bot]
88fa42de75
fix(deps): update github.com/tmc/langchaingo digest to c636b3d (#1188)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-20 17:03:01 +02:00
Ettore Di Giacinto
432513c3ba
ci: add GPU tests (#1095)
* ci: test GPU

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: show logs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Debug

* debug

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* split extra/core images

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* split extra/core images

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* consider runner host dir

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-19 13:50:40 +02:00
renovate[bot]
45370c212b
fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 9a19c74 (#1179)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-17 18:37:27 +02:00
Jesús Espino
e91f660eb1
feat(metrics): Adding initial support for prometheus metrics (#1176)
* feat(metrics): Adding initial support for prometheus metrics

* Fixing CI

* run go mod tidy
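As a rough sketch of what initial Prometheus instrumentation typically looks like in a Go service (the metric name, label, and plain net/http server here are illustrative assumptions, not LocalAI's actual code):

```
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Illustrative metric; real metric names and labels may differ.
var apiCalls = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "api_calls_total",
		Help: "Number of API calls, labelled by endpoint.",
	},
	[]string{"endpoint"},
)

func main() {
	prometheus.MustRegister(apiCalls)

	http.HandleFunc("/v1/chat/completions", func(w http.ResponseWriter, r *http.Request) {
		apiCalls.WithLabelValues(r.URL.Path).Inc() // count the call per endpoint
		w.WriteHeader(http.StatusOK)
	})

	// Expose the /metrics endpoint for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```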
2023-10-17 18:22:53 +02:00
renovate[bot]
3f3162e57c
fix(deps): update module github.com/gofiber/fiber/v2 to v2.50.0 (#1177)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-16 21:47:44 +02:00
renovate[bot]
208d1fce58
fix(deps): update github.com/tmc/langchaingo digest to a02d4fd (#1175)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-16 21:46:53 +02:00
Ettore Di Giacinto
128694213f
feat: llama.cpp gRPC C++ backend (#1170)
* wip: llama.cpp c++ gRPC server

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* make it work, attach it to the build process

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* update deps

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix: add protobuf dep

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* try fix protobuf on cmake

* cmake: workarounds

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* add packages

* cmake: use fixed version of grpc

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* cmake(grpc): install locally

* install grpc

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* install required deps for grpc on debian bullseye

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* debug

* debug

* Fixups

* no need to install cmake manually

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: fixup macOS

* use brew whenever possible

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* macOS fixups

* debug

* fix container build

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* workaround

* try mac

https://stackoverflow.com/questions/23905661/on-mac-g-clang-fails-to-search-usr-local-include-and-usr-local-lib-by-def

* Disable temp. arm64 docker image builds

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-16 21:46:29 +02:00
Jesús Espino
8034ed3473
Adding transcript subcommand (#1171)
Adding the transcript subcommand to the localai binary

This PR is related to #816
2023-10-15 09:17:41 +02:00
renovate[bot]
d22069c59e
fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 22de3c5 (#1172)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/nomic-ai/gpt4all/gpt4all-bindings/golang](https://togithub.com/nomic-ai/gpt4all) | require | digest | `10f9b49` -> `22de3c5` |

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-14 12:29:22 +02:00
renovate[bot]
5a04d32b39
fix(deps): update module github.com/sashabaranov/go-openai to v1.16.0 (#1159)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/sashabaranov/go-openai](https://togithub.com/sashabaranov/go-openai) | require | minor | `v1.15.4` -> `v1.16.0` |

---

### Release Notes

### [`v1.16.0`](https://togithub.com/sashabaranov/go-openai/releases/tag/v1.16.0)

[Compare
Source](https://togithub.com/sashabaranov/go-openai/compare/v1.15.4...v1.16.0)

#### What's Changed

- Add DotProduct Method and README Example for Embedding Similarity Search by [@ealvar3z](https://togithub.com/ealvar3z) in [https://github.com/sashabaranov/go-openai/pull/492](https://togithub.com/sashabaranov/go-openai/pull/492)
- fix: use any for n_epochs by [@henomis](https://togithub.com/henomis) in [https://github.com/sashabaranov/go-openai/pull/499](https://togithub.com/sashabaranov/go-openai/pull/499)
- Feat Add headers to openai responses by [@henomis](https://togithub.com/henomis) in [https://github.com/sashabaranov/go-openai/pull/506](https://togithub.com/sashabaranov/go-openai/pull/506)
- Support get http header and x-ratelimit-\* headers by [@liushuangls](https://togithub.com/liushuangls) in [https://github.com/sashabaranov/go-openai/pull/507](https://togithub.com/sashabaranov/go-openai/pull/507)

**Full Changelog**:
https://github.com/sashabaranov/go-openai/compare/v1.15.4...v1.16.0

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-14 12:28:58 +02:00
Jesús Espino
ab65f3a17d
Adding the tts command line subcommand (#1169)
This PR adds the tts (Text to Speech) command to the localai binary.

This PR is related to issue #816
2023-10-14 12:27:35 +02:00
renovate[bot]
4e23cbebcf
fix(deps): update github.com/nomic-ai/gpt4all/gpt4all-bindings/golang digest to 10f9b49 (#1158)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/nomic-ai/gpt4all/gpt4all-bindings/golang](https://togithub.com/nomic-ai/gpt4all) | require | digest | `56c0d28` -> `10f9b49` |

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-13 18:31:13 +02:00
Ettore Di Giacinto
63418c1afc
ci: cleanup worker (#1166)
**Description**

Tries to make CI green again

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-12 18:09:56 +02:00
Jesús Espino
8ca671761a
feat(cli): Adding models subcommand with list and install subcommands (#1165)
Adding subcommands to do certain actions directly from the command line.
I'm starting with the models subcommand allowing you to list models from
your galleries and install them.

This PR partially fixes #816

My intention is to keep adding other subcommands, but I think this is a
good start that already provides value.

Also, I added a new dependency to generate the progress bar in the command
line; it is not strictly needed, but I think it is a nice-to-have for a
cooler interface.

Here is a screenshot:

![imagen](https://github.com/go-skynet/LocalAI/assets/290303/8d8c1bf0-5340-46ce-9362-812694f914cd)
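As a rough sketch of the idea (the binary name, subcommand names, and behaviour are assumptions for illustration, not the actual LocalAI CLI code), dispatching `models list` / `models install` style subcommands in Go could look like:

```
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 3 || os.Args[1] != "models" {
		fmt.Println("usage: local-ai models <list|install> [name]")
		os.Exit(1)
	}

	switch os.Args[2] {
	case "list":
		// A real implementation would query the configured model galleries here.
		fmt.Println("listing models from the galleries...")
	case "install":
		if len(os.Args) < 4 {
			fmt.Println("usage: local-ai models install <name>")
			os.Exit(1)
		}
		// A real implementation would download the model and report progress.
		fmt.Printf("installing model %q...\n", os.Args[3])
	default:
		fmt.Println("unknown models subcommand:", os.Args[2])
		os.Exit(1)
	}
}
```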
2023-10-12 10:45:34 +02:00
Jesús Espino
81a5ed9f31
fix(openai): Populate ID and Created fields in OpenAI compatible responses (#1164)
Adding the extra ID and Created fields to the responses of the OpenAI-compatible
API to improve compatibility.

This PR fixes #1103
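For context, a minimal Go sketch of populating the `id` and `created` fields on an OpenAI-style response (the struct, field names, and uuid library are illustrative assumptions, not LocalAI's actual types):

```
package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/google/uuid"
)

// openAIResponse mirrors only the fields discussed here; the real structs may differ.
type openAIResponse struct {
	ID      string `json:"id"`
	Object  string `json:"object"`
	Created int64  `json:"created"`
	Model   string `json:"model"`
}

func main() {
	resp := openAIResponse{
		ID:      uuid.New().String(), // unique identifier for the response
		Object:  "chat.completion",
		Created: time.Now().Unix(), // Unix timestamp, as in the OpenAI API
		Model:   "example-model",
	}

	out, _ := json.Marshal(resp)
	fmt.Println(string(out))
}
```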
2023-10-12 02:00:08 +00:00
renovate[bot]
528b9d9206
fix(deps): update github.com/go-skynet/go-llama.cpp digest to aeba71e (#1155)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/go-skynet/go-llama.cpp](https://togithub.com/go-skynet/go-llama.cpp) | require | digest | `1676dcd` -> `aeba71e` |

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2023-10-11 18:19:13 +02:00