Commit Graph

1707 Commits

Author SHA1 Message Date
Ettore Di Giacinto
e0187c2a1a ci: do not tag latest on AIO automatically
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-24 09:41:13 +02:00
Ettore Di Giacinto
b76d2fe68a
Update quickstart.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-05-24 09:02:59 +02:00
Ettore Di Giacinto
ee4f722bf8
models(gallery): add aya-35b (#2391)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-23 23:51:34 +02:00
LocalAI [bot]
dce63237f2
⬆️ Update ggerganov/llama.cpp (#2360)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-23 21:02:13 +00:00
Dave
0b637465d9
refactor: Minor improvements to BackendConfigLoader (#2353)
Some minor renames and refactorings within BackendConfigLoader: make things more consistent, remove underused code, and rename things for clarity.

Signed-off-by: Dave Lee <dave@gray101.com>
2024-05-23 22:48:12 +02:00
Mauro Morales
114f549f5e
Add warning for running the binary on macOS (#2389) 2024-05-23 22:40:55 +02:00
Ettore Di Giacinto
ea330d452d
models(gallery): add mistral-0.3 and command-r, update functions (#2388)
* models(gallery): add mistral-0.3 and command-r, update functions

Also add disable_parallel_new_lines to disable newlines in the JSON
output when forcing parallel tool calls. Some models (like Mistral) might be
very sensitive to that when used for function calling (see the sketch after this entry).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* models(gallery): add aya-23-8b

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-23 19:16:08 +02:00
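
A minimal sketch of how the new option might be configured, assuming disable_parallel_new_lines sits in the same function.grammar stanza introduced by #2365 (the placement and the surrounding keys are assumptions based on the example quoted further down in this log):

```yaml
function:
  grammar:
    # Force parallel tool calls in the grammar
    parallel_calls: true
    # Assumed placement: drop the newlines emitted between parallel JSON
    # calls, which some models (e.g. Mistral) are reported to be sensitive to
    disable_parallel_new_lines: true
```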
Valentin Fröhlich
eb11a46a73
Add Home Assistant Integration (#2387)
Add https://github.com/valentinfrlch/ha-gpt4vision to Home Assistant Integration section

gpt4vision uses LocalAI's API to send images along with a prompt and returns the model's output.

Signed-off-by: Valentin Fröhlich <85313672+valentinfrlch@users.noreply.github.com>
2024-05-23 15:21:01 +02:00
LocalAI [bot]
b57e14d65c
models(gallery): ⬆️ update checksum (#2386)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-23 08:42:45 +02:00
Sertaç Özercan
7efa8e75d4
fix: stablediffusion binary (#2385)
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
2024-05-23 08:34:37 +02:00
Ettore Di Giacinto
7551369abe
Update checksum_checker.sh
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-05-23 08:33:58 +02:00
LocalAI [bot]
79915bcd11
models(gallery): ⬆️ update checksum (#2383)
⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-23 01:10:15 +00:00
LocalAI [bot]
c8d7d14a37
⬆️ Update go-skynet/go-bert.cpp (#1225)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-22 23:42:38 +00:00
LocalAI [bot]
c56bc0de98
⬆️ Update ggerganov/whisper.cpp (#2361)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-23 01:02:57 +02:00
Ettore Di Giacinto
3a9408363b
deps(llama.cpp): update and adapt API changes (#2381)
deps(llama.cpp): update and rename function

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-23 01:02:11 +02:00
Ettore Di Giacinto
21a12c2cdd
ci(checksum_checker): get the SHA from the HF API when available (#2380)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-22 23:51:02 +02:00
Ettore Di Giacinto
371d0cc1f7
ci: generate specific image for intel builds (#2374)
ci: fix Intel images until they are fixed upstream

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-22 23:35:39 +02:00
Ettore Di Giacinto
23fa92bec0
models(gallery): add hercules and helpingAI (#2376)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-22 22:42:41 +02:00
Ettore Di Giacinto
f91e4e5c03
ci: correctly build p2p in GO_TAGS (#2369)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-22 10:15:36 +02:00
Ettore Di Giacinto
6cbe6a4f99
models(gallery): add phi-3-medium-4k-instruct (#2367)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-22 08:32:30 +02:00
Ettore Di Giacinto
491e1d752b
feat(functions): relax mixedgrammars (#2365)
* feat(functions): relax mixedgrammars

Extend the functionality even further: when mixed mode is enabled, also
tolerate both strings and JSON in the result; in that case we make sure
that the JSON can be correctly parsed.

This also updates the examples and the gallery model to configure the
grammar.

The changeset also breaks the current function/grammar configuration, as it
now reserves a dedicated stanza in the YAML config.

For example:

```yaml
function:
  grammar:
    # This allows the grammar to also return messages
    mixed_mode: true
    # Prefix to add to the grammar
    # prefix: '<tool_call>\n'
    # Force parallel calls in the grammar
    # parallel_calls: true
```

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor, add a way to disable mixed json and freestring

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Fix linting issues

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-22 00:14:16 +02:00
nold
1542c58466
fix(gallery): checksum Meta-Llama-3-70B-Instruct.Q4_K_M.gguf - #2364 (#2366)
Signed-off-by: Gerrit Pannek <nold@gnu.one>
2024-05-21 21:51:48 +02:00
Ettore Di Giacinto
1a3dedece0
dependencies(grpcio): bump to fix CI issues (#2362)
feat(grpcio): bump to fix CI issues

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-21 14:33:47 +02:00
Ettore Di Giacinto
a58ff00ab1
models(gallery): add stheno (#2358)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-20 19:18:14 +02:00
Ettore Di Giacinto
fdb45153fe
feat(llama.cpp): Totally decentralized, private, distributed, p2p inference (#2343)
* feat(llama.cpp): Enable decentralized, distributed inference

Since https://github.com/mudler/LocalAI/pull/2324 introduced distributed inferencing, thanks to
@rgerganov's implementation in https://github.com/ggerganov/llama.cpp/pull/6829 in upstream llama.cpp,
it is now possible to distribute the workload to remote llama.cpp gRPC servers.

This changeset now uses mudler/edgevpn to establish a secure, distributed network between the nodes using a shared token.
The token is generated automatically when starting the server with the `--p2p` flag, and can be used by starting the workers
with `local-ai worker p2p-llama-cpp-rpc`, passing the token via the TOKEN environment variable or the --token argument.

As per how mudler/edgevpn works, a network is established between the server and the workers with the DHT and mDNS discovery protocols;
the llama.cpp RPC server is automatically started and exposed to the underlying p2p network so that the API server can connect to it.

When the HTTP server is started, it discovers the workers in the network and automatically creates local port-forwards to their services.
llama.cpp is then configured to use those services.

This feature is behind the "p2p" GO_TAGS build tag.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* go mod tidy

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* ci: add p2p tag

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* better message

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-20 19:17:59 +02:00
Ettore Di Giacinto
16474bfb40
build: add sha (#2356)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-20 18:02:19 +02:00
Ettore Di Giacinto
5a6d120a56
feat(functions): don't use yaml.MapSlice (#2354)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-20 08:31:06 +02:00
Ettore Di Giacinto
7a480bb16f
models(gallery): add LocalAI-Llama3-8b-Function-Call-v0.2-GGUF (#2355)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-20 00:59:17 +02:00
LocalAI [bot]
053531e434
⬆️ Update ggerganov/whisper.cpp (#2352)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-19 22:23:02 +00:00
LocalAI [bot]
b7ab4f25d9
⬆️ Update ggerganov/llama.cpp (#2351)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-19 22:22:03 +00:00
Ettore Di Giacinto
73566a2bb2
feat(functions): allow to use JSONRegexMatch unconditionally (#2349)
* feat(functions): allow to use JSONRegexMatch unconditionally

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(functions): make json_regex_match a list (see the sketch after this entry)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-19 18:24:49 +02:00
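
A hedged sketch of the list form of json_regex_match; the nesting under function: and the example pattern are illustrative assumptions, not taken from the changeset itself:

```yaml
function:
  # json_regex_match is now a list, so several extraction patterns
  # can be tried against the raw model output
  json_regex_match:
    - "(?s)<tool_call>(.*?)</tool_call>"  # hypothetical pattern
```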
Ettore Di Giacinto
8ccd5ab040
feat(webui): statically embed js/css assets (#2348)
* feat(webui): statically embed js/css assets

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* update font assets

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-19 18:24:27 +02:00
Ettore Di Giacinto
5a3db730b9
Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-05-19 16:37:10 +02:00
Ettore Di Giacinto
8ad669339e
add openvoice backend (#2334)
Wip openvoice
2024-05-19 16:27:08 +02:00
Ettore Di Giacinto
a10a952085
models(gallery): update poppy porpoise mmproj (#2346)
models(gallery): update poppy porpoise mmproj

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-19 13:26:02 +02:00
Ettore Di Giacinto
b37447cac5
models(gallery): add master-yi (#2345)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-19 13:25:29 +02:00
Ettore Di Giacinto
f2d182a2eb
models(gallery): add anita (#2344)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-19 13:25:16 +02:00
lenaxia
6b6c8cdd5f
feat(functions): Enable true regex replacement for the regexReplacement option (#2341)
* Adding regex capabilities to ParseFunctionCall replacement

Signed-off-by: Lenaxia <github@47north.lat>

* Adding tests for the regex replace in ParseFunctionCall

Signed-off-by: Lenaxia <github@47north.lat>

* Fixing tests and adding a test case to validate double quote replacement works

Signed-off-by: Lenaxia <github@47north.lat>

* Make Regex replacement stable, drop lookaheads

Signed-off-by: mudler <mudler@localai.io>

---------

Signed-off-by: Lenaxia <github@47north.lat>
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Lenaxia <github@47north.lat>
Co-authored-by: mudler <mudler@localai.io>
2024-05-19 01:29:10 +02:00
LocalAI [bot]
5f35e85e86
⬆️ Update ggerganov/llama.cpp (#2342)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-18 21:06:29 +00:00
Ettore Di Giacinto
02f1b477df
feat(functions): simplify parsing, read functions as list (#2340)
Signed-off-by: mudler <mudler@localai.io>
2024-05-18 09:35:28 +02:00
LocalAI [bot]
9ab8f8f5e0
⬆️ Update ggerganov/llama.cpp (#2339)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-17 21:13:01 +00:00
LocalAI [bot]
9a255d6453
⬆️ Update ggerganov/llama.cpp (#2337)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-16 21:53:19 +00:00
Ettore Di Giacinto
e0ef9e2bb9
models(gallery): add yi 6/9b, sqlcoder, sfr-iterative-dpo (#2335)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-05-16 20:05:20 +02:00
cryptk
86627b27f7
fix: add setuptools to all requirements-intel.txt files for python backends (#2333)
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
2024-05-16 19:15:46 +02:00
LocalAI [bot]
4e92569d45
⬆️ Update ggerganov/whisper.cpp (#2329)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-15 22:24:06 +00:00
Ettore Di Giacinto
f7508e3888
models(gallery): add hermes-2-theta-llama-3-8b (#2331)
Signed-off-by: mudler <mudler@localai.io>
2024-05-16 00:22:32 +02:00
Aleksandr Oleinikov
badfc16df1
fix(gallery) Correct llama3-8b-instruct model file (#2330)
Correct llama3-8b-instruct model file

This must be a mistake, because the config tries to use a model file that is different from the one actually being downloaded.
I assumed the downloaded file is what should be used, so I corrected the specified model file to match.

Signed-off-by: Aleksandr Oleinikov <10602045+tannisroot@users.noreply.github.com>
2024-05-16 00:22:05 +02:00
LocalAI [bot]
b584dcf18a
⬆️ Update ggerganov/llama.cpp (#2316)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2024-05-15 22:20:37 +00:00
Ettore Di Giacinto
4c845fb47d
Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-05-15 23:56:52 +02:00
Ettore Di Giacinto
07c0559d06
Update README.md
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-05-15 23:56:22 +02:00