# G3 - AI Coding Agent

G3 is a coding AI agent designed to help you complete tasks by writing code and executing commands. Built in Rust, it provides a flexible architecture for interacting with various Large Language Model (LLM) providers while offering powerful code generation and task automation capabilities.

## Key Features

- **Multiple LLM Providers**: Anthropic (Claude), Databricks, OpenAI, and local models via llama.cpp
- **Autonomous Mode**: Coach-player feedback loop for complex tasks
- **Intelligent Context Management**: Auto-summarization and context thinning at 50-80% thresholds
- **Rich Tool Ecosystem**: File operations, shell commands, computer control, browser automation
- **Streaming Responses**: Real-time output with tool call detection
- **Error Recovery**: Automatic retry logic with exponential backoff

## Getting Started

```bash
# Build the project
cargo build --release

# Execute a single task
g3 "implement a function to calculate fibonacci numbers"

# Start autonomous mode with interactive requirements
g3 --autonomous --interactive-requirements
```

## Configuration

Create `~/.config/g3/config.toml`:

```toml
[providers]
default_provider = "databricks"

[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-3-5-sonnet-20241022"
max_tokens = 4096

[providers.databricks]
host = "https://your-workspace.cloud.databricks.com"
model = "databricks-meta-llama-3-1-70b-instruct"
max_tokens = 4096
use_oauth = true

[agent]
max_context_length = 8192
enable_streaming = true

# Optional: Use different models for coach and player in autonomous mode
[autonomous]
coach_provider = "anthropic"
coach_model = "claude-3-5-sonnet-20241022"                  # Thorough review
player_provider = "databricks"
player_model = "databricks-meta-llama-3-1-70b-instruct"     # Fast execution
```

## Autonomous Mode (Coach-Player Loop)

G3 features an autonomous mode where two agents collaborate:

- **Player Agent**: Executes tasks and implements solutions
- **Coach Agent**: Reviews work and provides feedback

### Option 1: Interactive Requirements with AI Enhancement (Recommended)

```bash
g3 --autonomous --interactive-requirements
```

**How it works:**

1. Describe what you want to build (can be brief)
2. Press **Ctrl+D** (Unix/Mac) or **Ctrl+Z** (Windows)
3. AI enhances your input into a structured requirements document
4. Review the enhanced requirements
5. Choose to proceed, edit manually, or cancel
6. If accepted, autonomous mode starts automatically

**Example:**

```
You type: "build a todo app with cli in python"

AI generates:
# Todo List CLI Application

## Overview
A command-line todo list application built in Python...

## Functional Requirements
1. Add tasks with descriptions
2. Mark tasks as complete
3. Delete tasks
...
```

### Option 2: Direct Requirements

```bash
g3 --autonomous --requirements "Build a REST API with CRUD operations for user management"
```

### Option 3: Requirements File

Create `requirements.md` in your workspace:

```markdown
# Project Requirements

1. Create a REST API with user endpoints
2. Use SQLite for storage
3. Include input validation
4. Write unit tests
```

Then run:

```bash
g3 --autonomous
```

### Why Different Models for Coach and Player?
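For example, the optional `[autonomous]` override shown in the Configuration section pairs a thorough coach with a fast player. This is a sketch reusing the provider and model names from the example above; substitute whichever providers and models fit your cost and latency needs:

```toml
# Sketch: stronger model reviews, cheaper/faster model executes
# (provider and model names taken from the Configuration example above)
[autonomous]
coach_provider = "anthropic"
coach_model = "claude-3-5-sonnet-20241022"                  # Thorough review
player_provider = "databricks"
player_model = "databricks-meta-llama-3-1-70b-instruct"     # Fast execution
```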
Configure different models in the `[autonomous]` section to:

- **Optimize Cost**: Use a cheaper model for execution and a more capable (expensive) model for review
- **Optimize Speed**: Use a fast model for iteration and a thorough model for validation
- **Specialize**: Leverage provider strengths (e.g., Claude for analysis, Llama for code)

If not configured, both agents use the `default_provider` and its model.

## Command-Line Options

```bash
# Autonomous mode
g3 --autonomous --interactive-requirements
g3 --autonomous --requirements "Your requirements"
g3 --autonomous --max-turns 10

# Single-shot mode
g3 "your task here"

# Options
--workspace