docs: Fix documentation accuracy and add missing Gemini provider

Corrections made:
- docs/architecture.md: Fix crate count from 9 to 8 (actual count)
- docs/tools.md: Fix code_search supported languages (kotlin -> haskell, scheme, racket)
- docs/CODE_SEARCH.md: Add missing Haskell and Scheme to supported languages list
- docs/providers.md: Add complete Gemini provider documentation section
- docs/configuration.md: Add Gemini configuration section

The Gemini provider (crates/g3-providers/src/gemini.rs) was fully implemented
but not documented. The code_search tool actually supports haskell and scheme
(via tree-sitter) but documentation incorrectly listed kotlin.

Agent: lamport
Dhanji R. Prasanna
2026-01-29 12:06:53 +11:00
parent f9e0b94cc1
commit 457ba35f80
5 changed files with 66 additions and 4 deletions

docs/CODE_SEARCH.md

@@ -40,8 +40,9 @@ g3 includes a syntax-aware code search tool powered by tree-sitter. Unlike text-
 - Java
 - C
 - C++
-- Racket
+- Haskell
+- Scheme
+- Racket
 ## Basic Usage

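As a rough analogy for what "syntax-aware" buys over text matching (Python's `ast` module stands in for tree-sitter here; nothing below mirrors g3's actual implementation), a structural search finds only the real definition while a plain substring search also hits the comment:

```python
import ast

source = '''
# calls parse_config somewhere
def parse_config(path):
    return path
'''

# Plain text search counts every occurrence, including the comment;
# a syntax-aware search walks the parse tree and keeps only real
# function definitions.
text_hits = source.count("parse_config")
defs = [n.name for n in ast.walk(ast.parse(source))
        if isinstance(n, ast.FunctionDef) and n.name == "parse_config"]

print(text_hits, defs)  # 2 ['parse_config']
```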
docs/architecture.md

@@ -57,7 +57,7 @@ g3 follows a **tool-first philosophy**: instead of just providing advice, it act
 ## Workspace Structure
-g3 is organized as a Rust workspace with 9 crates:
+g3 is organized as a Rust workspace with 8 crates:
 ```
 g3/

docs/configuration.md

@@ -89,6 +89,21 @@ use_oauth = true # Use OAuth (recommended)
 - **OAuth** (`use_oauth = true`): Opens browser for authentication, tokens refresh automatically
 - **Token** (`token = "..."`, `use_oauth = false`): Uses personal access token directly
+### Gemini Configuration
+```toml
+[providers.gemini.default]
+api_key = "your-google-api-key" # Required: Your Google AI API key
+model = "gemini-2.0-flash" # Model to use
+max_tokens = 8192
+temperature = 0.7
+```
+**Available Gemini models**:
+- `gemini-2.0-flash` (recommended)
+- `gemini-1.5-pro`
+- `gemini-1.5-flash`
 ### OpenAI Configuration
 ```toml

docs/providers.md

@@ -1,6 +1,6 @@
 # g3 LLM Providers Guide
-**Last updated**: January 2025
+**Last updated**: January 2025 (Gemini provider added)
 **Source of truth**: `crates/g3-providers/src/`
 ## Purpose
@@ -13,6 +13,7 @@ This document describes the LLM providers supported by g3, their capabilities, a
 |----------|------|--------------|---------------|----------------|----------|
 | **Anthropic** | Cloud | Native | Yes | 200k (1M optional) | General use, complex tasks |
 | **Databricks** | Cloud | Native | Yes (Claude models) | Varies | Enterprise, existing Databricks users |
+| **Gemini** | Cloud | Native | No | 1M-2M | Google ecosystem, large context |
 | **OpenAI** | Cloud | Native | No | 128k | GPT model preference |
 | **OpenAI-Compatible** | Cloud | Native | No | Varies | OpenRouter, Groq, Together, etc. |
 | **Embedded** | Local | JSON fallback | No | 4k-32k | Privacy, offline, cost savings |
@@ -121,6 +122,50 @@ Models depend on your Databricks workspace configuration:
 ---
+## Gemini
+**Location**: `crates/g3-providers/src/gemini.rs`
+### Features
+- **Native tool calling**: Full support for structured tool calls
+- **Large context windows**: Up to 2M tokens on some models
+- **Streaming**: Real-time response streaming
+- **Google ecosystem**: Integrates with Google Cloud
+### Configuration
+```toml
+[providers.gemini.default]
+api_key = "your-google-api-key" # Required
+model = "gemini-2.0-flash" # Model name
+max_tokens = 8192 # Max output tokens
+temperature = 0.7 # 0.0-1.0
+```
+### Available Models
+| Model | Context | Notes |
+|-------|---------|-------|
+| `gemini-2.0-flash` | 1M | Fast, efficient |
+| `gemini-1.5-pro` | 2M | Most capable |
+| `gemini-1.5-flash` | 1M | Balanced speed/quality |
+### Getting an API Key
+1. Go to [Google AI Studio](https://aistudio.google.com/)
+2. Create or select a project
+3. Generate an API key
+4. Add to your g3 configuration
+### Notes
+- Gemini models have very large context windows (1M-2M tokens)
+- Good for tasks requiring extensive context
+- Native tool calling works well for agentic workflows
 ---
 ## OpenAI
 **Location**: `crates/g3-providers/src/openai.rs`
@@ -291,6 +336,7 @@ This works but is less reliable than native tool calling.
 | Privacy-critical | Embedded |
 | Offline development | Embedded |
 | Fast iteration | Groq (Llama) |
+| Large context needs | Gemini (1M-2M tokens) |
 | Model variety | OpenRouter |
 ### By Priority

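For orientation, the request shape the documented Gemini provider presumably sends follows Google's public `generateContent` REST API; the sketch below only builds the URL and JSON body (it does not send anything), and `build_request` is an illustrative helper, not a function from g3's `gemini.rs`:

```python
import json

# Endpoint path per the public Gemini REST API; the helper name and
# structure are assumptions for illustration, not g3's implementation.
API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_request(model: str, api_key: str, prompt: str):
    url = f"{API_BASE}/models/{model}:generateContent?key={api_key}"
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {"maxOutputTokens": 8192, "temperature": 0.7},
    }
    return url, json.dumps(body)

url, body = build_request("gemini-2.0-flash", "your-google-api-key", "Hello")
print(url)
```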
docs/tools.md

@@ -233,7 +233,7 @@ Syntax-aware code search using tree-sitter.
 - `max_concurrency` (integer, optional): Parallel searches (default: 4)
 - `max_matches_per_search` (integer, optional): Max matches (default: 500)
-**Supported languages**: rust, python, javascript, typescript, go, java, c, cpp, kotlin
+**Supported languages**: rust, python, javascript, typescript, go, java, c, cpp, haskell, scheme, racket
 **Example**:
 ```json