Continue.dev is an AI code-editing tool built on top of VS Code.

Setup

Continue.dev can be used fully offline with local model support via Ollama. It is configured through a YAML file. Here is an example configuration that combines Ollama on the local machine with API calls to a LiteLLM proxy server:

 
name: my-configuration
version: 0.0.1
schema: v1
models:
  - name: qwen2.5-coder 1.5b
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - chat
      - edit
      - apply
      - autocomplete
  - name: Autodetect
    provider: ollama
    model: AUTODETECT
  - name: Autodetect
    provider: openai
    model: AUTODETECT
    apiBase: https://litellm.yoursite.example/v1/
    apiKey: sk-some-api-key

Agent Mode and Tool Support

Currently, Continue only supports agent mode for certain models that support tool use. How does it detect whether a model supports tool usage? It has a hard-coded pattern-matching routine in its source code.
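To illustrate the idea, here is a simplified sketch of name-based capability detection. This is not Continue's actual source; the pattern list and function name are made up for illustration, but the shape is the same: match the configured model name against a hard-coded list of patterns known to support tool calling.

```typescript
// Hypothetical sketch of hard-coded tool-support detection.
// The patterns below are illustrative, not Continue's real list.
const TOOL_CAPABLE_PATTERNS: RegExp[] = [
  /gpt-4/i,
  /claude/i,
  /qwen2\.5-coder/i,
];

function modelSupportsTools(modelName: string): boolean {
  // Agent mode is enabled only if some pattern matches the model name.
  return TOOL_CAPABLE_PATTERNS.some((pattern) => pattern.test(modelName));
}

console.log(modelSupportsTools("qwen2.5-coder:1.5b")); // true
console.log(modelSupportsTools("my-custom-finetune")); // false
```

The obvious drawback of this approach is that any locally fine-tuned or newly released model falls through the list, even if it handles tool calls perfectly well.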

There is an open issue about this here. I proposed a new ‘role’ that would allow people to override the tool detection.
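Under that proposal, a user could declare tool capability explicitly in the config instead of relying on name matching. A hypothetical sketch (the `tools` role name is illustrative; this is not merged into Continue):

```yaml
models:
  - name: my-local-model
    provider: ollama
    model: my-custom-finetune:7b
    roles:
      - chat
      - tools    # proposed explicit opt-in to tool use / agent mode
```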