# Continue
This document presents an example of integration with [continuedev/continue](https://github.com/continuedev/continue).

For a live demonstration, please see the link below:

- Integration Setup Walkthrough
- As outlined in `continue`'s documentation, install the Visual Studio Code extension from the marketplace and open it.
- In this example, LocalAI will download the gpt4all model and set it up as "gpt-3.5-turbo". Refer to the `docker-compose.yaml` file for details. The first start has to fetch the model, which can take a while; a way to follow its progress is sketched after this list.

  ```bash
  # Clone LocalAI
  git clone https://github.com/go-skynet/LocalAI

  cd LocalAI/examples/continue

  # Start with docker-compose
  docker-compose up --build -d
  ```
- Type `/config` within Continue's VSCode extension, or edit the file located at `~/.continue/config.py` on your system with the following configuration:

  ```py
  from continuedev.src.continuedev.libs.llm.openai import OpenAI, OpenAIServerInfo

  config = ContinueConfig(
      ...
      models=Models(
          default=OpenAI(
              api_key="my-api-key",
              model="gpt-3.5-turbo",
              openai_server_info=OpenAIServerInfo(
                  api_base="http://localhost:8080",
                  model="gpt-3.5-turbo"
              )
          )
      ),
  )
  ```
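As mentioned in the second step, the initial `docker-compose up` downloads the gpt4all model before the API becomes usable. A minimal sketch for following that progress, assuming the LocalAI service in this example's compose file is named `api` (check `docker-compose.yaml` if yours differs):

```bash
# Follow the container logs until the model download and setup finish
# ("api" is an assumed service name; verify it in docker-compose.yaml)
docker-compose logs -f api

# Confirm the container is running and port 8080 is mapped
docker-compose ps
```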
With this configuration in place, you can make queries directly to your model running in the Docker container. Note that the `api_key` does not need to be properly set up; it is included here only as a placeholder.
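For example, you can exercise the same OpenAI-compatible endpoints that Continue will call. A quick sanity check from the host, assuming the defaults above (port 8080, model name "gpt-3.5-turbo"):

```bash
# List the models LocalAI exposes; "gpt-3.5-turbo" should appear
curl http://localhost:8080/v1/models

# Send a chat completion request through the OpenAI-compatible API
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "How are you?"}]
      }'
```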
If editing the configuration seems confusing, you may copy and paste the provided default `config.py` file over the existing one in `~/.continue/config.py` after initializing the extension in the VSCode IDE.
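Assuming your shell is still in the `LocalAI/examples/continue` directory from the earlier step, that copy is a one-liner:

```bash
# Overwrite the extension-generated config with this example's config.py
cp config.py ~/.continue/config.py
```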