# G3 Configuration Guide

Last updated: January 2025
Source of truth: `crates/g3-config/src/lib.rs`, `config.example.toml`
## Purpose

This document explains how to configure G3, including provider setup, agent behavior, and optional features like WebDriver and computer control.
## Configuration File Location

G3 looks for configuration files in this order:

1. Path specified via the `--config` CLI argument
2. `./g3.toml` (current directory)
3. `~/.config/g3/config.toml` (user config)
4. `~/.g3.toml` (legacy location)

If no configuration file exists, G3 creates a default one at `~/.config/g3/config.toml` on first run.
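The lookup order above can be sketched as a small resolution function. This is an illustrative sketch only (G3's actual implementation lives in `crates/g3-config`), and `find_config` is a hypothetical name:

```python
from pathlib import Path

def find_config(cli_path=None):
    """Return the first configuration file that exists, in priority order.

    Illustrative sketch of the lookup order described above, not G3's code.
    """
    candidates = []
    if cli_path is not None:
        candidates.append(Path(cli_path))  # --config argument wins
    candidates += [
        Path("./g3.toml"),                       # current directory
        Path.home() / ".config/g3/config.toml",  # user config
        Path.home() / ".g3.toml",                # legacy location
    ]
    for path in candidates:
        if path.is_file():
            return path
    return None  # G3 would create the default config at this point
```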
## Configuration Format

G3 uses TOML format. The configuration is organized into sections:

```toml
[providers]        # LLM provider settings
[agent]            # Agent behavior settings
[computer_control] # Mouse/keyboard automation
[webdriver]        # Browser automation
[macax]            # macOS Accessibility API
```
## Provider Configuration

### Provider Reference Format

Providers are referenced using the format: `<provider_type>.<config_name>`

Examples:

- `anthropic.default`
- `databricks.production`
- `openai.gpt4`
- `embedded.local`
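The reference format splits at the first dot. A minimal sketch of that parsing rule (the function name is hypothetical, not G3's API):

```python
def parse_provider_ref(ref: str) -> tuple[str, str]:
    """Split '<provider_type>.<config_name>' at the first dot."""
    provider_type, sep, config_name = ref.partition(".")
    if not sep or not provider_type or not config_name:
        raise ValueError(f"expected '<provider_type>.<config_name>', got {ref!r}")
    return provider_type, config_name
```

For example, `parse_provider_ref("anthropic.default")` yields `("anthropic", "default")`.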
### Basic Provider Setup

```toml
[providers]
# Default provider used for all operations
default_provider = "anthropic.default"

# Optional: Different providers for different roles
# planner = "anthropic.planner" # Planning mode
# coach = "anthropic.default"   # Code reviewer in autonomous mode
# player = "anthropic.default"  # Code implementer in autonomous mode
```
### Anthropic Configuration

```toml
[providers.anthropic.default]
api_key = "sk-ant-..."      # Required: Your Anthropic API key
model = "claude-sonnet-4-5" # Model to use
max_tokens = 64000          # Max output tokens per request
temperature = 0.3           # Sampling temperature (0.0-1.0)
# cache_config = "ephemeral"     # Optional: Enable prompt caching
# enable_1m_context = true       # Optional: Enable 1M context (extra cost)
# thinking_budget_tokens = 10000 # Optional: Extended thinking mode
```

Available Anthropic models:

- `claude-sonnet-4-5` (recommended)
- `claude-opus-4-5`
- `claude-3-5-sonnet-20241022`
- `claude-3-opus-20240229`
### Databricks Configuration

```toml
[providers.databricks.default]
host = "https://your-workspace.cloud.databricks.com" # Required
model = "databricks-claude-sonnet-4"                 # Model endpoint
max_tokens = 4096
temperature = 0.1
use_oauth = true  # Use OAuth (recommended)
# token = "dapi..." # Or use personal access token
```

OAuth vs Token Authentication:

- OAuth (`use_oauth = true`): Opens a browser for authentication; tokens refresh automatically
- Token (`token = "..."`, `use_oauth = false`): Uses a personal access token directly
### OpenAI Configuration

```toml
[providers.openai.default]
api_key = "sk-..."    # Required: Your OpenAI API key
model = "gpt-4-turbo" # Model to use
max_tokens = 4096
temperature = 0.1
# base_url = "https://api.openai.com/v1" # Optional: Custom endpoint
```
### OpenAI-Compatible Providers

For services with OpenAI-compatible APIs (OpenRouter, Groq, Together, etc.):

```toml
[providers.openai_compatible.openrouter]
api_key = "sk-or-..." # Provider's API key
model = "anthropic/claude-3.5-sonnet"
base_url = "https://openrouter.ai/api/v1"
max_tokens = 4096
temperature = 0.1

[providers.openai_compatible.groq]
api_key = "gsk_..."
model = "llama-3.3-70b-versatile"
base_url = "https://api.groq.com/openai/v1"
max_tokens = 4096
temperature = 0.1
```

Reference these as `openrouter.default` or `groq.default` in `default_provider`.
### Embedded (Local) Models

```toml
[providers.embedded.default]
model_path = "~/.cache/g3/models/qwen2.5-7b-instruct-q3_k_m.gguf"
model_type = "qwen"    # Model architecture
context_length = 32768 # Context window size
max_tokens = 2048      # Max output tokens
temperature = 0.1
gpu_layers = 32        # Layers to offload to GPU (Metal/CUDA)
threads = 8            # CPU threads for inference
```

Supported model types: `qwen`, `codellama`, `llama`, `mistral`

Hardware requirements:

- 4-16 GB RAM depending on model size
- Optional GPU acceleration (Metal on macOS, CUDA on Linux)
## Agent Configuration

```toml
[agent]
# Context and token settings
fallback_default_max_tokens = 8192 # Default max tokens if provider doesn't specify
# max_context_length = 200000      # Override context window size for all providers

# Behavior settings
enable_streaming = true          # Stream responses in real-time
allow_multiple_tool_calls = true # Allow multiple tools per response
timeout_seconds = 60             # Request timeout
auto_compact = true              # Auto-compact context at 90%

# Retry settings
max_retry_attempts = 3            # Retries for interactive mode
autonomous_max_retry_attempts = 6 # Retries for autonomous mode

# TODO management
check_todo_staleness = true # Warn about stale TODO items
```
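The `auto_compact` trigger above fires once the context reaches 90% of the window. A minimal sketch of that check (the function name and threshold constant are illustrative, not G3's API):

```python
COMPACT_THRESHOLD = 0.9  # compaction triggers at 90% of the context window

def should_compact(used_tokens: int, max_context_length: int,
                   auto_compact: bool = True) -> bool:
    """Return True when the context is full enough to auto-compact."""
    if not auto_compact or max_context_length <= 0:
        return False
    return used_tokens / max_context_length >= COMPACT_THRESHOLD
```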
### Retry Behavior

G3 automatically retries on recoverable errors:

- Rate limits (HTTP 429)
- Network errors
- Server errors (HTTP 5xx)
- Timeouts

Interactive mode uses `max_retry_attempts` (default: 3); autonomous mode uses `autonomous_max_retry_attempts` (default: 6) with longer delays.
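A retry loop of this shape typically pairs a recoverable-status check with exponential backoff. The sketch below is an assumption about the general pattern, not G3's actual implementation; the status-code set matches the recoverable errors listed above, but the delay formula and helper names are illustrative:

```python
import random

# HTTP statuses treated as recoverable (rate limits and server errors)
RECOVERABLE_STATUS = {429} | set(range(500, 600))

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with jitter: base * 2^attempt, capped, then jittered."""
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

def call_with_retries(send, max_attempts: int = 3):
    """Call send() until it succeeds or attempts are exhausted.

    send() is assumed to return a (status, body) pair.
    """
    for attempt in range(max_attempts):
        status, body = send()
        if status not in RECOVERABLE_STATUS:
            return status, body
        # Real code would time.sleep(backoff_delay(attempt)) before retrying.
    return status, body
```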
## Computer Control Configuration

```toml
[computer_control]
enabled = false             # Set to true to enable
require_confirmation = true # Require confirmation before actions
max_actions_per_second = 5  # Rate limit for safety
```

Required OS permissions:

- macOS: System Preferences → Security & Privacy → Accessibility
- Linux: X11 or Wayland access
- Windows: Run as administrator (first time)
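The `max_actions_per_second` safety limit amounts to enforcing a minimum interval between actions. A minimal sketch of one way to do that (class and method names are hypothetical; G3's actual limiter may differ):

```python
import time

class ActionRateLimiter:
    """Enforce a minimum interval of 1/max_actions_per_second between actions."""

    def __init__(self, max_actions_per_second: int):
        self.min_interval = 1.0 / max_actions_per_second
        self.next_allowed = 0.0

    def wait(self, now=None) -> float:
        """Return seconds to sleep before the next action is allowed."""
        now = time.monotonic() if now is None else now
        delay = max(0.0, self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.min_interval
        return delay
```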
## WebDriver Configuration

```toml
[webdriver]
enabled = false    # Set to true to enable
browser = "safari" # "safari" or "chrome-headless"
safari_port = 4444 # Safari WebDriver port
chrome_port = 9515 # ChromeDriver port
# chrome_binary = "/path/to/chrome" # Optional: Custom Chrome path
```
### Safari Setup (macOS)

```bash
# Enable Safari remote automation (one-time setup)
safaridriver --enable
```

Or via the Safari UI: Safari → Preferences → Advanced → Show Develop menu, then Develop → Allow Remote Automation.
### Chrome Setup

**Option 1: Chrome for Testing (Recommended)**

```bash
./scripts/setup-chrome-for-testing.sh
```

Then configure:

```toml
[webdriver]
chrome_binary = "/Users/yourname/.chrome-for-testing/chrome-mac-arm64/Google Chrome for Testing.app/Contents/MacOS/Google Chrome for Testing"
```

**Option 2: System Chrome**

```bash
# macOS
brew install chromedriver

# Linux
apt install chromium-chromedriver
```
## macOS Accessibility API Configuration

```toml
[macax]
enabled = false # Set to true to enable
```

Required permissions: System Preferences → Security & Privacy → Privacy → Accessibility → add your terminal app.

See the macOS Accessibility Tools Guide for detailed usage.
## Multi-Role Configuration

For autonomous mode with different models for coach and player:

```toml
[providers]
default_provider = "anthropic.default"
coach = "anthropic.coach"   # Code reviewer
player = "anthropic.player" # Code implementer

[providers.anthropic.coach]
api_key = "sk-ant-..."
model = "claude-sonnet-4-5"
max_tokens = 32000
temperature = 0.1 # Lower for consistent reviews

[providers.anthropic.player]
api_key = "sk-ant-..."
model = "claude-sonnet-4-5"
max_tokens = 64000
temperature = 0.3 # Higher for creative implementations
```

See `config.coach-player.example.toml` for a complete example.
## Environment Variables

Environment variables override configuration file settings:

| Variable | Description |
|---|---|
| `G3_WORKSPACE_PATH` | Override workspace directory |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `OPENAI_API_KEY` | OpenAI API key |
| `DATABRICKS_HOST` | Databricks workspace URL |
| `DATABRICKS_TOKEN` | Databricks personal access token |
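Combined with CLI arguments (which have the highest priority), settings resolve in the order CLI > environment > config file. A minimal sketch of that precedence, assuming a hypothetical `resolve_setting` helper and a partial env-var mapping:

```python
import os

# Partial, illustrative mapping from config keys to env-var overrides
ENV_OVERRIDES = {
    "api_key": "ANTHROPIC_API_KEY",
}

def resolve_setting(key: str, cli_args: dict, file_config: dict,
                    env=None):
    """CLI arguments beat environment variables, which beat the config file."""
    env = os.environ if env is None else env
    if key in cli_args:
        return cli_args[key]
    env_var = ENV_OVERRIDES.get(key)
    if env_var and env_var in env:
        return env[env_var]
    return file_config.get(key)
```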
## CLI Overrides

CLI arguments have the highest priority:

```bash
# Override provider
g3 --provider anthropic.default

# Override model
g3 --model claude-opus-4-5

# Enable features
g3 --webdriver       # Enable WebDriver (Safari)
g3 --chrome-headless # Enable WebDriver (Chrome headless)
g3 --macax           # Enable macOS Accessibility API

# Specify config file
g3 --config /path/to/config.toml
```
## Complete Example Configuration

```toml
# ~/.config/g3/config.toml

[providers]
default_provider = "anthropic.default"

[providers.anthropic.default]
api_key = "sk-ant-api03-..."
model = "claude-sonnet-4-5"
max_tokens = 64000
temperature = 0.3

[providers.databricks.work]
host = "https://mycompany.cloud.databricks.com"
model = "databricks-claude-sonnet-4"
max_tokens = 4096
temperature = 0.1
use_oauth = true

[agent]
fallback_default_max_tokens = 8192
enable_streaming = true
allow_multiple_tool_calls = true
timeout_seconds = 60
max_retry_attempts = 3
autonomous_max_retry_attempts = 6

[computer_control]
enabled = false
require_confirmation = true
max_actions_per_second = 5

[webdriver]
enabled = true
browser = "safari"
safari_port = 4444

[macax]
enabled = false
```
## Troubleshooting

### "Old config format" error

If you see this error, your config uses a deprecated format. Update to the new named provider format.

Old format (deprecated):

```toml
[providers.anthropic]
api_key = "..."
```

New format:

```toml
[providers.anthropic.default]
api_key = "..."
```

### Provider not found

Ensure your `default_provider` matches a configured provider:

```toml
default_provider = "anthropic.default" # Must match [providers.anthropic.default]
```
### OAuth issues

For Databricks OAuth:

1. Ensure `use_oauth = true`
2. Remove any `token` setting
3. A browser window will open for authentication
4. Tokens are cached in `~/.databricks/oauth-tokens.json`

### Context window errors

If you see context overflow errors:

- Check `max_context_length` in `[agent]`
- Use the `/compact` command to manually summarize
- Use `/thinnify` to replace large tool results with file references