Custom Provider

Use your own AI model with DBCode — Ollama, OpenAI, Groq, Together, LM Studio, or any OpenAI-compatible endpoint.

DBCode can use any OpenAI-compatible API as its AI provider for inline completion and execution plan analysis. This lets you use local models (Ollama, LM Studio), cloud APIs (OpenAI, Groq, Together), or any service that exposes the standard /v1/chat/completions endpoint.
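DBCode's exact request internals aren't documented here, but every OpenAI-compatible provider accepts the same chat-completions shape. A minimal sketch of that request body in Python (the endpoint, model, and prompt are placeholder values, not what DBCode actually sends):

```python
import json

# Placeholder values -- substitute your configured endpoint and model.
endpoint = "http://localhost:11434"
model = "qwen2.5-coder:7b"

# Standard OpenAI-compatible chat-completions request body.
payload = {
    "model": model,
    "messages": [
        {"role": "system", "content": "You are a SQL completion assistant."},
        {"role": "user", "content": "Complete: SELECT * FROM orders WHERE"},
    ],
}

# The body is POSTed as JSON to {endpoint}/v1/chat/completions.
body = json.dumps(payload)
url = endpoint + "/v1/chat/completions"
print(url)
```

Any server that answers this shape at /v1/chat/completions can act as a DBCode provider.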

Why Use a Custom Provider

  • Privacy: Run models locally so no data leaves your machine.
  • Model choice: Pick the model that best fits your needs — small and fast for completions, large and capable for analysis.
  • Cost control: Use free local models or your own API keys instead of a Copilot subscription or DBCode’s hosted models.
  • Flexibility: Switch models per request from the AI assistant panel without changing your default.


Setup

1. Configure the Endpoint

Open Settings (Cmd/Ctrl+,) and set:

  • dbcode.ai.customModel.endpoint — The base URL of your API server.

Examples:

Provider             Endpoint
Ollama (local)       http://localhost:11434
LM Studio (local)    http://localhost:1234
OpenAI               https://api.openai.com
Groq                 https://api.groq.com/openai
Together             https://api.together.xyz
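The endpoint is a base URL, not a full path: DBCode appends the standard API routes to it, which is why Groq's entry carries the /openai suffix. A quick sketch of the composition:

```python
# Base URLs from the table above; the standard chat-completions
# path is appended to whatever base you configure.
bases = ["http://localhost:11434", "https://api.groq.com/openai"]
urls = [base + "/v1/chat/completions" for base in bases]
for url in urls:
    print(url)
```

If requests 404 against a cloud provider, check whether its OpenAI-compatible API lives under a prefix like /openai or /v1 and include it in the base URL.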

2. Configure the Model

  • dbcode.ai.customModel.model — The model identifier.

Examples:

Provider    Model
Ollama      codellama:7b-instruct, qwen2.5-coder:7b
OpenAI      gpt-4o, gpt-4o-mini
Groq        llama-3.3-70b-versatile
Together    meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo

3. Set an API Key (if required)

Local servers like Ollama and LM Studio typically don’t require authentication. Cloud providers do.

  1. Open the Command Palette (F1 or Cmd/Ctrl+Shift+P)
  2. Run: DBCode: Set Custom Model API Key
  3. Enter your API key

The key is stored securely in VS Code’s SecretStorage (your OS keychain). It is never written to settings files.

If you send a request without a key and the server returns a 401 or 403 error, DBCode will automatically prompt you to enter one.
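Cloud providers expect the key as a standard Bearer token. A sketch of the headers an OpenAI-compatible request carries (the key value is a placeholder; DBCode reads the real one from SecretStorage):

```python
api_key = "sk-example-placeholder"  # placeholder -- never hard-code a real key

headers = {
    "Content-Type": "application/json",
    # Standard OpenAI-compatible auth header. Local servers like Ollama
    # and LM Studio typically accept requests without it.
    "Authorization": f"Bearer {api_key}",
}
```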

Settings Reference

Setting                           Description                            Default
dbcode.ai.customModel.endpoint    OpenAI-compatible API base URL         (none)
dbcode.ai.customModel.model       Model name / identifier                (none)
dbcode.ai.customModel.timeout     Request timeout in seconds             30
dbcode.ai.customModel.only        Disable fallback to other providers    false

Example settings.json:

{
  "dbcode.ai.customModel.endpoint": "http://localhost:11434",
  "dbcode.ai.customModel.model": "qwen2.5-coder:7b"
}

Provider Hierarchy and Fallback

When a custom provider is configured, DBCode uses it as the primary AI provider. If it fails (server unreachable, model not found, etc.), DBCode offers to fall back through the provider chain:

  1. Custom Model — your configured endpoint
  2. GitHub Copilot — if installed and active
  3. DBCode AI — hosted model, always available

For inline completions, the fallback happens automatically, with an info notification. For interactive features like execution plan analysis, DBCode shows a confirmation dialog before switching.
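DBCode's internals aren't shown here, but the behavior described above amounts to trying each provider in order until one succeeds. An illustrative sketch (the function names and ProviderError type are hypothetical, not DBCode's actual API):

```python
class ProviderError(Exception):
    pass

def custom_model(prompt):
    # Simulate a failing custom endpoint (server unreachable, etc.).
    raise ProviderError("server unreachable")

def copilot(prompt):
    return f"copilot: {prompt}"

def dbcode_ai(prompt):
    return f"dbcode-ai: {prompt}"

def complete(prompt, only_custom=False):
    # dbcode.ai.customModel.only=true pins the chain to the custom model.
    chain = [custom_model] if only_custom else [custom_model, copilot, dbcode_ai]
    for provider in chain:
        try:
            return provider(prompt)
        except ProviderError:
            continue  # fall through to the next provider in the chain
    raise ProviderError("all providers failed")

print(complete("SELECT 1"))  # custom model fails, Copilot answers
```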

To prevent fallback and use only your custom model, enable dbcode.ai.customModel.only.
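For example, a settings.json that pins DBCode to a local Ollama server with no fallback might look like this (model name is illustrative):

```json
{
  "dbcode.ai.customModel.endpoint": "http://localhost:11434",
  "dbcode.ai.customModel.model": "qwen2.5-coder:7b",
  "dbcode.ai.customModel.only": true
}
```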

Choosing a Provider

You can switch providers at any time:

  1. Open the Command Palette (F1 or Cmd/Ctrl+Shift+P)
  2. Run: DBCode: Choose AI Provider
  3. Select Custom Model, Copilot, or DBCode AI

Changing Models

To change the model within your current provider:

  1. Open the Command Palette
  2. Run: DBCode: Change AI Model
  3. Enter the new model name (custom) or select from the list (Copilot)

When changing models from the AI assistant panel during an analysis, the change only applies to that request — your default settings are not modified.

Troubleshooting

“Cannot reach custom model” Error

DBCode probes the endpoint on startup by calling /v1/models (or /api/tags for Ollama). If neither responds:

  • Verify the server is running and accessible at the configured URL
  • Check for firewall or proxy issues
  • Ensure the URL includes the protocol (http:// or https://)
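To check reachability outside DBCode, you can probe the same paths yourself. A minimal sketch using Python's standard library (the endpoint argument is whatever you configured):

```python
import urllib.error
import urllib.request

def probe(endpoint, timeout=5):
    """Return True if the endpoint answers on a standard model-list path."""
    # DBCode probes /v1/models (or /api/tags for Ollama); try both.
    for path in ("/v1/models", "/api/tags"):
        try:
            with urllib.request.urlopen(endpoint + path, timeout=timeout):
                return True
        except urllib.error.HTTPError:
            # The server responded, even if with an error status: reachable.
            return True
        except (urllib.error.URLError, OSError):
            continue  # unreachable on this path; try the next one
    return False
```

For example, probe("http://localhost:11434") should return True while Ollama is running locally.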

Authentication Errors (401 / 403)

  • Run DBCode: Set Custom Model API Key to set or update your key
  • Verify the key is valid for the configured endpoint
  • If the key was rotated, clear the old one and re-enter it

Slow Responses

  • Increase dbcode.ai.customModel.timeout (default is 30 seconds)
  • For local models, consider a smaller/faster model for inline completions
  • Use a larger model selectively for plan analysis by changing the model from the AI assistant panel

Model Not Found (404)

  • Verify the model name matches exactly what the server expects
  • For Ollama, run ollama list to see available models
  • For cloud APIs, check the provider’s documentation for valid model identifiers
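The /v1/models endpoint lists the identifiers the server actually accepts, so comparing your configured name against it is a quick sanity check. A sketch against a sample response (the response body below is fabricated for illustration):

```python
import json

# Example /v1/models response body (fabricated for illustration).
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "qwen2.5-coder:7b", "object": "model"},
    {"id": "codellama:7b-instruct", "object": "model"}
  ]
}
""")

available = [m["id"] for m in sample["data"]]
configured = "qwen2.5-coder:7b"
print(configured in available)  # False here usually means a 404 at request time
```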