diff --git a/docs/content/howtos/_index.md b/docs/content/howtos/_index.md
index 95eab2bd..86829ba6 100644
--- a/docs/content/howtos/_index.md
+++ b/docs/content/howtos/_index.md
@@ -8,8 +8,7 @@ weight = 9
 
 This section includes LocalAI end-to-end examples, tutorial and how-tos curated by the community and maintained by [lunamidori5](https://github.com/lunamidori5).
 
-- [Setup LocalAI with Docker on CPU]({{%relref "howtos/easy-setup-docker-cpu" %}})
-- [Setup LocalAI with Docker With CUDA]({{%relref "howtos/easy-setup-docker-gpu" %}})
+- [Setup LocalAI with Docker]({{%relref "howtos/easy-setup-docker" %}})
 - [Seting up a Model]({{%relref "howtos/easy-model" %}})
 - [Making Text / LLM requests to LocalAI]({{%relref "howtos/easy-request" %}})
 - [Making Photo / SD requests to LocalAI]({{%relref "howtos/easy-setup-sd" %}})
diff --git a/docs/content/howtos/easy-setup-docker-cpu.md b/docs/content/howtos/easy-setup-docker-cpu.md
deleted file mode 100644
index 6ab882b2..00000000
--- a/docs/content/howtos/easy-setup-docker-cpu.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-+++
-disableToc = false
-title = "Easy Setup - CPU Docker"
-weight = 2
-+++
-
-{{% notice Note %}}
-- You will need about 10gb of RAM Free
-- You will need about 15gb of space free on C drive for ``Docker compose``
-{{% /notice %}}
-
-We are going to run `LocalAI` with `docker compose` for this set up.
-
-Lets setup our folders for ``LocalAI``
-{{< tabs >}}
-{{% tab name="Windows (Batch)" %}}
-```batch
-mkdir "LocalAI"
-cd LocalAI
-mkdir "models"
-mkdir "images"
-```
-{{% /tab %}}
-
-{{% tab name="Linux (Bash / WSL)" %}}
-```bash
-mkdir -p "LocalAI"
-cd LocalAI
-mkdir -p "models"
-mkdir -p "images"
-```
-{{% /tab %}}
-{{< /tabs >}}
-
-At this point we want to set up our `.env` file, here is a copy for you to use if you wish, Make sure this is in the ``LocalAI`` folder.
-
-```bash
-## Set number of threads.
-## Note: prefer the number of physical cores. Overbooking the CPU degrades performance notably.
-THREADS=2
-
-## Specify a different bind address (defaults to ":8080")
-# ADDRESS=127.0.0.1:8080
-
-## Define galleries.
-## models will to install will be visible in `/models/available`
-GALLERIES=[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]
-
-## Default path for models
-MODELS_PATH=/models
-
-## Enable debug mode
-# DEBUG=true
-
-## Disables COMPEL (Lets Stable Diffuser work, uncomment if you plan on using it)
-# COMPEL=0
-
-## Enable/Disable single backend (useful if only one GPU is available)
-# SINGLE_ACTIVE_BACKEND=true
-
-## Specify a build type. Available: cublas, openblas, clblas.
-BUILD_TYPE=cublas
-
-## Uncomment and set to true to enable rebuilding from source
-# REBUILD=true
-
-## Enable go tags, available: stablediffusion, tts
-## stablediffusion: image generation with stablediffusion
-## tts: enables text-to-speech with go-piper
-## (requires REBUILD=true)
-#
-#GO_TAGS=tts
-
-## Path where to store generated images
-# IMAGE_PATH=/tmp
-
-## Specify a default upload limit in MB (whisper)
-# UPLOAD_LIMIT
-
-# HUGGINGFACEHUB_API_TOKEN=Token here
-```
-
-
-Now that we have the `.env` set lets set up our `docker-compose` file.
-It will use a container from [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags).
-Also note this `docker-compose` file is for `CPU` only.
-
-```docker
-version: '3.6'
-
-services:
-  api:
-    image: quay.io/go-skynet/local-ai:v2.0.0
-    tty: true # enable colorized logs
-    restart: always # should this be on-failure ?
-    ports:
-      - 8080:8080
-    env_file:
-      - .env
-    volumes:
-      - ./models:/models
-      - ./images/:/tmp/generated/images/
-    command: ["/usr/bin/local-ai" ]
-```
-
-
-Make sure to save that in the root of the `LocalAI` folder. Then lets spin up the Docker run this in a `CMD` or `BASH`
-
-```bash
-docker compose up -d --pull always
-```
-
-
-Now we are going to let that set up, once it is done, lets check to make sure our huggingface / localai galleries are working (wait until you see this screen to do this)
-
-You should see:
-```
-┌───────────────────────────────────────────────────┐
-│                   Fiber v2.42.0                   │
-│               http://127.0.0.1:8080               │
-│       (bound on host 0.0.0.0 and port 8080)       │
-│                                                   │
-│ Handlers ............. 1  Processes ........... 1 │
-│ Prefork ....... Disabled  PID ................. 1 │
-└───────────────────────────────────────────────────┘
-```
-
-```bash
-curl http://localhost:8080/models/available
-```
-
-Output will look like this:
-
-![](https://cdn.discordapp.com/attachments/1116933141895053322/1134037542845566976/image.png)
-
-Now that we got that setup, lets go setup a [model]({{%relref "easy-model" %}})
diff --git a/docs/content/howtos/easy-setup-docker-gpu.md b/docs/content/howtos/easy-setup-docker.md
similarity index 82%
rename from docs/content/howtos/easy-setup-docker-gpu.md
rename to docs/content/howtos/easy-setup-docker.md
index 274e9da0..f1c40e5b 100644
--- a/docs/content/howtos/easy-setup-docker-gpu.md
+++ b/docs/content/howtos/easy-setup-docker.md
@@ -1,7 +1,7 @@
 
 +++
 disableToc = false
-title = "Easy Setup - GPU Docker"
+title = "Easy Setup - Docker"
 weight = 2
 +++
 
@@ -12,26 +12,13 @@ weight = 2
 
 We are going to run `LocalAI` with `docker compose` for this set up.
 
-Lets Setup our folders for ``LocalAI``
-{{< tabs >}}
-{{% tab name="Windows (Batch)" %}}
+Let's set up our folders for ``LocalAI`` (run these commands to create the folders, if you wish)
 ```batch
 mkdir "LocalAI"
 cd LocalAI
 mkdir "models"
 mkdir "images"
 ```
-{{% /tab %}}
-
-{{% tab name="Linux (Bash / WSL)" %}}
-```bash
-mkdir -p "LocalAI"
-cd LocalAI
-mkdir -p "models"
-mkdir -p "images"
-```
-{{% /tab %}}
-{{< /tabs >}}
 
 At this point we want to set up our `.env` file, here is a copy for you to use if you wish, Make sure this is in the ``LocalAI`` folder.
 
@@ -51,7 +38,7 @@ GALLERIES=[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.
 MODELS_PATH=/models
 
 ## Enable debug mode
-# DEBUG=true
+DEBUG=true
 
 ## Disables COMPEL (Lets Stable Diffuser work, uncomment if you plan on using it)
 # COMPEL=0
@@ -84,6 +71,32 @@ BUILD_TYPE=cublas
 
 Now that we have the `.env` set lets set up our `docker-compose` file.
 It will use a container from [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags).
+
+
+{{< tabs >}}
+{{% tab name="CPU Only" %}}
+Also note this `docker-compose` file is for `CPU` only.
+
+```docker
+version: '3.6'
+
+services:
+  api:
+    image: quay.io/go-skynet/local-ai:{{< version >}}
+    tty: true # enable colorized logs
+    restart: always # should this be on-failure ?
+    ports:
+      - 8080:8080
+    env_file:
+      - .env
+    volumes:
+      - ./models:/models
+      - ./images/:/tmp/generated/images/
+    command: ["/usr/bin/local-ai" ]
+```
+{{% /tab %}}
+
+{{% tab name="GPU and CPU" %}}
 Also note this `docker-compose` file is for `CUDA` only.
 
 Please change the image to what you need.
@@ -91,10 +104,10 @@
{{% tab name="GPU Images CUDA 11" %}} - `master-cublas-cuda11` - `master-cublas-cuda11-core` -- `v2.0.0-cublas-cuda11` -- `v2.0.0-cublas-cuda11-core` -- `v2.0.0-cublas-cuda11-ffmpeg` -- `v2.0.0-cublas-cuda11-ffmpeg-core` +- `{{< version >}}-cublas-cuda11` +- `{{< version >}}-cublas-cuda11-core` +- `{{< version >}}-cublas-cuda11-ffmpeg` +- `{{< version >}}-cublas-cuda11-ffmpeg-core` Core Images - Smaller images without predownload python dependencies {{% /tab %}} @@ -102,10 +115,10 @@ Core Images - Smaller images without predownload python dependencies {{% tab name="GPU Images CUDA 12" %}} - `master-cublas-cuda12` - `master-cublas-cuda12-core` -- `v2.0.0-cublas-cuda12` -- `v2.0.0-cublas-cuda12-core` -- `v2.0.0-cublas-cuda12-ffmpeg` -- `v2.0.0-cublas-cuda12-ffmpeg-core` +- `{{< version >}}-cublas-cuda12` +- `{{< version >}}-cublas-cuda12-core` +- `{{< version >}}-cublas-cuda12-ffmpeg` +- `{{< version >}}-cublas-cuda12-ffmpeg-core` Core Images - Smaller images without predownload python dependencies {{% /tab %}} @@ -135,6 +148,8 @@ services: - ./images/:/tmp/generated/images/ command: ["/usr/bin/local-ai" ] ``` +{{% /tab %}} +{{< /tabs >}} Make sure to save that in the root of the `LocalAI` folder. Then lets spin up the Docker run this in a `CMD` or `BASH` diff --git a/docs/content/howtos/easy-setup-embeddings.md b/docs/content/howtos/easy-setup-embeddings.md index 8c3fc2b7..4e223781 100644 --- a/docs/content/howtos/easy-setup-embeddings.md +++ b/docs/content/howtos/easy-setup-embeddings.md @@ -12,17 +12,6 @@ curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d ' }' ``` -Now we need to make a ``bert.yaml`` in the models folder -```yaml -backend: bert-embeddings -embeddings: true -name: text-embedding-ada-002 -parameters: - model: bert -``` - -**Restart LocalAI after you change a yaml file** - When you would like to request the model from CLI you can do ```bash @@ -30,7 +19,7 @@ curl http://localhost:8080/v1/embeddings \ -H "Content-Type: application/json" \ -d '{ "input": "The food was delicious and the waiter...", - "model": "text-embedding-ada-002" + "model": "bert-embeddings" }' ``` diff --git a/docs/content/howtos/easy-setup-sd.md b/docs/content/howtos/easy-setup-sd.md index 4dd83505..e47a84d1 100644 --- a/docs/content/howtos/easy-setup-sd.md +++ b/docs/content/howtos/easy-setup-sd.md @@ -5,7 +5,7 @@ weight = 2 +++ To set up a Stable Diffusion model is super easy. -In your models folder make a file called ``stablediffusion.yaml``, then edit that file with the following. (You can change ``Linaqruf/animagine-xl`` with what ever ``sd-lx`` model you would like. +In your ``models`` folder make a file called ``stablediffusion.yaml``, then edit that file with the following. (You can change ``Linaqruf/animagine-xl`` with what ever ``sd-lx`` model you would like. ```yaml name: animagine-xl parameters: @@ -21,8 +21,7 @@ diffusers: If you are using docker, you will need to run in the localai folder with the ``docker-compose.yaml`` file in it ```bash -docker-compose down #windows -docker compose down #linux/mac +docker compose down ``` Then in your ``.env`` file uncomment this line. @@ -32,14 +31,13 @@ COMPEL=0 After that we can reinstall the LocalAI docker VM by running in the localai folder with the ``docker-compose.yaml`` file in it ```bash -docker-compose up #windows -docker compose up #linux/mac +docker compose up -d ``` Then to download and setup the model, Just send in a normal ``OpenAI`` request! LocalAI will do the rest! 
 ```bash
 curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
   "prompt": "Two Boxes, 1blue, 1red",
-  "size": "256x256"
+  "size": "1024x1024"
 }'
 ```
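
To sanity-check the merged Docker guide end to end, the steps above reduce to the short script below. This is a minimal sketch, assuming Docker Compose v2 and that the `.env` and `docker-compose.yaml` from the guide are saved inside the `LocalAI` folder.

```bash
# Create the folder layout the guide expects.
mkdir -p LocalAI/models LocalAI/images
cd LocalAI

# With .env and docker-compose.yaml in place, pull and start the container.
docker compose up -d --pull always

# Watch the logs until the Fiber banner appears (Ctrl+C to stop following),
# then confirm the model galleries are reachable.
docker compose logs -f api
curl http://localhost:8080/models/available
```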
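
The embeddings request can be verified from the shell the same way. A small sketch, assuming the response follows the usual OpenAI embeddings schema (`data[0].embedding`) and that `jq` is installed:

```bash
# Request an embedding and print the vector length; a non-zero count
# means the bert-embeddings model loaded and answered correctly.
curl -s http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "bert-embeddings"
  }' | jq '.data[0].embedding | length'
```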
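
For the image-generation request, note that the compose files above mount `./images/` to `/tmp/generated/images/` inside the container, so finished images should appear on the host (assuming the default image path in `.env` is left unchanged):

```bash
# After the /v1/images/generations call returns, list the mounted folder
# to find the generated file.
ls -l images/
```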