Compare commits

..

4 Commits

Author SHA1 Message Date
Michael Neale
8d8ddbe4b9 live reloading of detected things 2025-11-14 16:31:46 +11:00
Michael Neale
0466405d87 don't detect console, better process pickup 2025-11-13 18:46:55 +11:00
Dhanji R. Prasanna
39efa24c55 Merge pull request #21 from dhanji/openai-compatible
allow openai to be used to name named compatible providers
2025-11-11 08:42:28 +11:00
Michael Neale
81cd956c20 allow openai to be used to name named compatible providers 2025-11-10 16:12:33 +11:00
14 changed files with 381 additions and 519 deletions

View File

@@ -1,171 +0,0 @@
# Changelog: Requirements Persistence Feature
## Summary
Enhanced the accumulative autonomous mode (`--auto` / default mode) to automatically persist requirements to a local `.g3/requirements.md` file.
## Changes Made
### 1. Core Implementation (`crates/g3-cli/src/lib.rs`)
#### New Functions Added:
- **`ensure_g3_dir(workspace_dir: &Path) -> Result<PathBuf>`**
- Creates `.g3` directory in the workspace if it doesn't exist
- Returns the path to the `.g3` directory
- **`load_existing_requirements(workspace_dir: &Path) -> Result<Vec<String>>`**
- Loads requirements from `.g3/requirements.md` if the file exists
- Parses numbered requirements (format: `1. requirement text`)
- Returns empty vector if file doesn't exist
- **`save_requirements(workspace_dir: &Path, requirements: &[String]) -> Result<()>`**
- Saves accumulated requirements to `.g3/requirements.md`
- Creates `.g3` directory if needed
- Formats as markdown with numbered list
#### Modified Functions:
- **`run_accumulative_mode()`**
- Now loads existing requirements on startup
- Displays loaded requirements to user
- Initializes turn number based on existing requirements count
- Saves requirements after each new requirement is added
- Shows save confirmation message
- Updated `/requirements` command to show file location
### 2. Version Control (`.gitignore`)
- Added `.g3/` directory to `.gitignore`
- Prevents accidental commit of local requirements
- Users can opt-in to version control if desired
### 3. Documentation
#### New Documentation:
- **`docs/REQUIREMENTS_PERSISTENCE.md`**
- Comprehensive guide to the requirements persistence feature
- Usage examples and commands
- File format specification
- Use cases and best practices
- Comparison with traditional autonomous mode
#### Updated Documentation:
- **`README.md`**
- Added requirements persistence section to "Getting Started"
- Highlighted key benefits (resume, review, share)
- Added example showing `.g3/requirements.md` usage
### 4. Testing
- **`test_requirements.sh`**
- Simple test script for manual verification
- Creates test directory and provides instructions
## User-Facing Changes
### New Behavior
1. **Automatic Saving**
- Every requirement entered is immediately saved to `.g3/requirements.md`
- User sees confirmation: `💾 Saved to .g3/requirements.md`
2. **Automatic Loading**
- On startup, G3 checks for existing `.g3/requirements.md`
- If found, loads and displays requirements
- Shows: `📂 Loaded N existing requirement(s) from .g3/requirements.md`
3. **Enhanced `/requirements` Command**
- Now shows file location in output
- Format: `📋 Accumulated Requirements (saved to .g3/requirements.md):`
4. **Session Resumability**
- Users can exit and resume work later
- Requirements persist across sessions
- Turn numbering continues from previous session
### File Structure
```
my-project/
├── .g3/
│ └── requirements.md # NEW: Accumulated requirements
├── logs/ # Existing: Session logs
└── ... (project files)
```
### Requirements File Format
```markdown
# Project Requirements
1. First requirement
2. Second requirement
3. Third requirement
```
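Serializing to this format is straightforward. A minimal sketch (the function name and numbering-on-write are illustrative, not the exact implementation, which stores requirements pre-numbered):

```rust
// Illustrative serializer for the requirements file format shown above:
// a "# Project Requirements" heading followed by a numbered list.
fn format_requirements(requirements: &[String]) -> String {
    let body = requirements
        .iter()
        .enumerate()
        .map(|(i, req)| format!("{}. {}", i + 1, req))
        .collect::<Vec<_>>()
        .join("\n");
    format!("# Project Requirements\n\n{}\n", body)
}

fn main() {
    let reqs = vec![
        "First requirement".to_string(),
        "Second requirement".to_string(),
    ];
    let doc = format_requirements(&reqs);
    assert!(doc.starts_with("# Project Requirements"));
    assert!(doc.contains("2. Second requirement"));
    println!("{}", doc);
}
```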
## Benefits
1. **Persistence**: No data loss if G3 crashes or is interrupted
2. **Transparency**: Always know what G3 is working on
3. **Resumability**: Pick up where you left off
4. **Documentation**: Requirements serve as project documentation
5. **Collaboration**: Share requirements with team members
6. **Auditability**: Track what was requested and when
## Backward Compatibility
- ✅ Fully backward compatible
- ✅ No breaking changes to existing functionality
- ✅ Works seamlessly with existing projects
- ✅ Graceful handling of missing `.g3` directory
- ✅ Error handling for file I/O issues
## Error Handling
- If `.g3/requirements.md` cannot be read: Shows warning, continues with empty requirements
- If `.g3/requirements.md` cannot be written: Shows warning, continues with in-memory requirements
- Non-blocking errors don't interrupt workflow
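The warn-and-continue pattern above can be sketched as follows; `save_requirements` here is a hypothetical stand-in that always fails, purely to show the non-blocking path:

```rust
use std::io::{Error, ErrorKind};

// Stand-in for the real save function: always fails, to exercise the warning path.
fn save_requirements(_reqs: &[String]) -> Result<(), Error> {
    Err(Error::new(ErrorKind::PermissionDenied, "Permission denied"))
}

fn main() {
    let reqs = vec!["1. First requirement".to_string()];
    // A save failure is reported as a warning and never aborts the session.
    match save_requirements(&reqs) {
        Ok(()) => println!("💾 Saved to .g3/requirements.md"),
        Err(e) => println!("⚠️ Warning: Could not save requirements: {}", e),
    }
    // The session continues with in-memory requirements either way.
    assert_eq!(reqs.len(), 1);
}
```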
## Testing Checklist
- [x] Build succeeds without errors
- [ ] Manual test: Create new requirements in fresh directory
- [ ] Manual test: Resume session with existing requirements
- [ ] Manual test: `/requirements` command shows file location
- [ ] Manual test: Requirements file format is correct
- [ ] Manual test: Error handling for permission issues
- [ ] Manual test: `.g3` directory is created automatically
- [ ] Manual test: `.g3` directory is ignored by git
## Future Enhancements
Potential improvements for future versions:
1. Requirement status tracking (pending, in-progress, completed)
2. Requirement dependencies and ordering
3. Requirement templates and snippets
4. Integration with issue trackers
5. Requirement validation and linting
6. Export to other formats (JSON, YAML, etc.)
7. Requirement search and filtering
8. Requirement history and versioning
## Migration Guide
No migration needed! The feature works automatically:
1. Update to the new version
2. Run `g3` in any directory
3. Enter requirements as usual
4. Requirements are automatically saved to `.g3/requirements.md`
## Related Files
- `crates/g3-cli/src/lib.rs` - Core implementation
- `.gitignore` - Version control exclusion
- `docs/REQUIREMENTS_PERSISTENCE.md` - Feature documentation
- `README.md` - Updated getting started guide
- `test_requirements.sh` - Test script

View File

@@ -137,11 +137,6 @@ G3 is designed for:
The default interactive mode now uses **accumulative autonomous mode**, which combines the best of interactive and autonomous workflows:
**Requirements Persistence**: All requirements are automatically saved to `.g3/requirements.md` in your workspace, allowing you to:
- Resume work across sessions
- Review what you've asked G3 to build
- Share requirements with team members
```bash
# Simply run g3 in any directory
g3
@@ -157,9 +152,6 @@ requirement> create a simple web server in Python with Flask
# ... autonomous mode runs and implements it ...
requirement> add a /health endpoint that returns JSON
# ... autonomous mode runs again with both requirements ...
# Requirements are saved to .g3/requirements.md
# Use /requirements command to view them
```
### Other Modes

View File

@@ -15,6 +15,25 @@ max_tokens = 4096 # Per-request output limit (how many tokens the model can gen
temperature = 0.1
use_oauth = true
# Multiple OpenAI-compatible providers can be configured with custom names
# Each provider gets its own section under [providers.openai_compatible.<name>]
# [providers.openai_compatible.openrouter]
# api_key = "your-openrouter-api-key"
# model = "anthropic/claude-3.5-sonnet"
# base_url = "https://openrouter.ai/api/v1"
# max_tokens = 4096
# temperature = 0.1
# [providers.openai_compatible.groq]
# api_key = "your-groq-api-key"
# model = "llama-3.3-70b-versatile"
# base_url = "https://api.groq.com/openai/v1"
# max_tokens = 4096
# temperature = 0.1
# To use one of these providers, set default_provider to the name you chose:
# default_provider = "openrouter"
[agent]
fallback_default_max_tokens = 8192
# max_context_length: Override the context window size for all providers

View File

@@ -439,51 +439,6 @@ pub async fn run() -> Result<()> {
Ok(())
}
/// Ensure .g3 directory exists in the workspace
fn ensure_g3_dir(workspace_dir: &Path) -> Result<PathBuf> {
let g3_dir = workspace_dir.join(".g3");
if !g3_dir.exists() {
std::fs::create_dir_all(&g3_dir)?;
}
Ok(g3_dir)
}
/// Load existing requirements from .g3/requirements.md if it exists
fn load_existing_requirements(workspace_dir: &Path) -> Result<Vec<String>> {
let g3_dir = workspace_dir.join(".g3");
let requirements_file = g3_dir.join("requirements.md");
if !requirements_file.exists() {
return Ok(Vec::new());
}
let content = std::fs::read_to_string(&requirements_file)?;
// Parse the requirements from the markdown file
let mut requirements = Vec::new();
for line in content.lines() {
// Look for numbered requirements (e.g., "1. requirement text")
if let Some(stripped) = line.strip_prefix(|c: char| c.is_ascii_digit()) {
if let Some(req) = stripped.strip_prefix(". ") {
// Reconstruct the numbered format
let num = line.chars().take_while(|c| c.is_ascii_digit()).collect::<String>();
requirements.push(format!("{}. {}", num, req));
}
}
}
Ok(requirements)
}
/// Save accumulated requirements to .g3/requirements.md
fn save_requirements(workspace_dir: &Path, requirements: &[String]) -> Result<()> {
let g3_dir = ensure_g3_dir(workspace_dir)?;
let requirements_file = g3_dir.join("requirements.md");
let content = format!("# Project Requirements\n\n{}\n", requirements.join("\n"));
std::fs::write(&requirements_file, content)?;
Ok(())
}
/// Accumulative autonomous mode: accumulates requirements from user input
/// and runs autonomous mode after each input
async fn run_accumulative_mode(
@@ -519,25 +474,9 @@ async fn run_accumulative_mode(
let _ = rl.load_history(history_path);
}
-// Load existing requirements from .g3/requirements.md if it exists
-let mut accumulated_requirements = match load_existing_requirements(&workspace_dir) {
-Ok(reqs) if !reqs.is_empty() => {
-output.print("");
-output.print(&format!("📂 Loaded {} existing requirement(s) from .g3/requirements.md", reqs.len()));
-output.print("");
-for req in &reqs {
-output.print(&format!("   {}", req));
-}
-output.print("");
-reqs
-}
-Ok(_) => Vec::new(),
-Err(e) => {
-output.print(&format!("⚠️ Warning: Could not load existing requirements: {}", e));
-Vec::new()
-}
-};
-let mut turn_number = accumulated_requirements.len();
+// Accumulated requirements stored in memory
+let mut accumulated_requirements = Vec::new();
+let mut turn_number = 0;
loop {
output.print(&format!("\n{}", "=".repeat(60)));
@@ -580,8 +519,7 @@ async fn run_accumulative_mode(
if accumulated_requirements.is_empty() {
output.print("📋 No requirements accumulated yet");
} else {
-let req_file = workspace_dir.join(".g3/requirements.md");
-output.print(&format!("📋 Accumulated Requirements (saved to {}):", req_file.display()));
+output.print("📋 Accumulated Requirements:");
output.print("");
for req in &accumulated_requirements {
output.print(&format!("   {}", req));
@@ -667,13 +605,6 @@ async fn run_accumulative_mode(
turn_number += 1;
accumulated_requirements.push(format!("{}. {}", turn_number, input));
// Save requirements to .g3/requirements.md
if let Err(e) = save_requirements(&workspace_dir, &accumulated_requirements) {
output.print(&format!("⚠️ Warning: Could not save requirements to .g3/requirements.md: {}", e));
} else {
output.print(&format!("💾 Saved to .g3/requirements.md"));
}
// Build the complete requirements document
let requirements_doc = format!(
"# Project Requirements\n\n\

View File

@@ -14,6 +14,9 @@ pub struct Config {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProvidersConfig {
pub openai: Option<OpenAIConfig>,
/// Multiple named OpenAI-compatible providers (e.g., openrouter, groq, etc.)
#[serde(default)]
pub openai_compatible: std::collections::HashMap<String, OpenAIConfig>,
pub anthropic: Option<AnthropicConfig>,
pub databricks: Option<DatabricksConfig>,
pub embedded: Option<EmbeddedConfig>,
@@ -121,6 +124,7 @@ impl Default for Config {
Self {
providers: ProvidersConfig {
openai: None,
openai_compatible: std::collections::HashMap::new(),
anthropic: None,
databricks: Some(DatabricksConfig {
host: "https://your-workspace.cloud.databricks.com".to_string(),
@@ -239,6 +243,7 @@ impl Config {
Self {
providers: ProvidersConfig {
openai: None,
openai_compatible: std::collections::HashMap::new(),
anthropic: None,
databricks: None,
embedded: Some(EmbeddedConfig {

View File

@@ -0,0 +1,256 @@
use crate::models::{InstanceStats, TurnInfo};
use anyhow::{Context, Result};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::fs;
use std::path::Path;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LogEntry {
pub timestamp: Option<DateTime<Utc>>,
pub role: Option<String>,
pub content: Option<String>,
pub tool_calls: Option<Vec<Value>>,
pub raw: Value,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatMessage {
pub role: String,
pub content: String,
pub timestamp: Option<DateTime<Utc>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCall {
pub name: String,
pub parameters: Value,
pub result: Option<String>,
pub timestamp: Option<DateTime<Utc>>,
}
pub struct LogParser;
impl LogParser {
/// Parse logs from a workspace directory
pub fn parse_logs(workspace: &Path) -> Result<Vec<LogEntry>> {
let logs_dir = workspace.join("logs");
if !logs_dir.exists() {
return Ok(Vec::new());
}
let mut entries = Vec::new();
// Read all JSON log files
for entry in fs::read_dir(&logs_dir).context("Failed to read logs directory")? {
let entry = entry?;
let path = entry.path();
if path.extension().and_then(|s| s.to_str()) == Some("json") {
if let Ok(content) = fs::read_to_string(&path) {
if let Ok(json) = serde_json::from_str::<Value>(&content) {
// Try to parse as a log session
if let Some(messages) = json.get("messages").and_then(|m| m.as_array()) {
for msg in messages {
entries.push(LogEntry {
timestamp: msg.get("timestamp")
.and_then(|t| t.as_str())
.and_then(|s| DateTime::parse_from_rfc3339(s).ok())
.map(|dt| dt.with_timezone(&Utc)),
role: msg.get("role")
.and_then(|r| r.as_str())
.map(String::from),
content: msg.get("content")
.and_then(|c| c.as_str())
.map(String::from),
tool_calls: msg.get("tool_calls")
.and_then(|tc| tc.as_array())
.map(|arr| arr.clone()),
raw: msg.clone(),
});
}
}
}
}
}
}
// Sort by timestamp
entries.sort_by(|a, b| {
match (&a.timestamp, &b.timestamp) {
(Some(t1), Some(t2)) => t1.cmp(t2),
(Some(_), None) => std::cmp::Ordering::Less,
(None, Some(_)) => std::cmp::Ordering::Greater,
(None, None) => std::cmp::Ordering::Equal,
}
});
Ok(entries)
}
/// Extract chat messages from log entries
pub fn extract_chat_messages(entries: &[LogEntry]) -> Vec<ChatMessage> {
entries
.iter()
.filter_map(|entry| {
let role = entry.role.clone()?;
let content = entry.content.clone()?;
Some(ChatMessage {
role,
content,
timestamp: entry.timestamp,
})
})
.collect()
}
/// Extract tool calls from log entries
pub fn extract_tool_calls(entries: &[LogEntry]) -> Vec<ToolCall> {
let mut tool_calls = Vec::new();
for entry in entries {
if let Some(calls) = &entry.tool_calls {
for call in calls {
if let Some(name) = call.get("name").and_then(|n| n.as_str()) {
tool_calls.push(ToolCall {
name: name.to_string(),
parameters: call.get("parameters")
.cloned()
.unwrap_or(Value::Object(serde_json::Map::new())),
result: call.get("result")
.and_then(|r| r.as_str())
.map(String::from),
timestamp: entry.timestamp,
});
}
}
}
}
tool_calls
}
}
pub struct StatsAggregator;
impl StatsAggregator {
/// Aggregate statistics from log entries
pub fn aggregate_stats(
entries: &[LogEntry],
start_time: DateTime<Utc>,
is_ensemble: bool,
) -> InstanceStats {
let total_tokens = Self::count_tokens(entries);
let tool_calls = Self::count_tool_calls(entries);
let errors = Self::count_errors(entries);
let duration_secs = if let Some(last_entry) = entries.last() {
if let Some(last_time) = last_entry.timestamp {
(last_time - start_time).num_seconds().max(0) as u64
} else {
(Utc::now() - start_time).num_seconds().max(0) as u64
}
} else {
(Utc::now() - start_time).num_seconds().max(0) as u64
};
let turns = if is_ensemble {
Some(Self::extract_turns(entries))
} else {
None
};
InstanceStats {
total_tokens,
tool_calls,
errors,
duration_secs,
turns,
}
}
/// Get the latest message content from log entries
pub fn get_latest_message(entries: &[LogEntry]) -> Option<String> {
entries
.iter()
.rev()
.find(|entry| entry.role.as_deref() == Some("assistant"))
.and_then(|entry| entry.content.clone())
.or_else(|| {
entries
.iter()
.rev()
.find(|entry| entry.content.is_some())
.and_then(|entry| entry.content.clone())
})
}
fn count_tokens(entries: &[LogEntry]) -> u64 {
// Try to extract token counts from metadata
entries
.iter()
.filter_map(|entry| {
entry.raw.get("usage")
.and_then(|u| u.get("total_tokens"))
.and_then(|t| t.as_u64())
})
.sum()
}
fn count_tool_calls(entries: &[LogEntry]) -> u64 {
entries
.iter()
.filter_map(|entry| entry.tool_calls.as_ref())
.map(|calls| calls.len() as u64)
.sum()
}
fn count_errors(entries: &[LogEntry]) -> u64 {
entries
.iter()
.filter(|entry| {
entry.raw.get("error").is_some()
|| entry.content.as_ref().map(|c| c.to_lowercase().contains("error")).unwrap_or(false)
})
.count() as u64
}
fn extract_turns(entries: &[LogEntry]) -> Vec<TurnInfo> {
// Simple implementation: group consecutive assistant messages as turns
let mut turns = Vec::new();
let mut current_turn_start: Option<DateTime<Utc>> = None;
let mut turn_count = 0;
for entry in entries {
if entry.role.as_deref() == Some("assistant") {
if current_turn_start.is_none() {
current_turn_start = entry.timestamp;
turn_count += 1;
}
} else if entry.role.as_deref() == Some("user") {
if let Some(start) = current_turn_start {
if let Some(end) = entry.timestamp {
let duration = (end - start).num_seconds().max(0) as u64;
turns.push(TurnInfo {
agent: format!("agent-{}", turn_count),
duration_secs: duration,
status: "completed".to_string(),
color: Self::get_turn_color(turn_count),
});
}
current_turn_start = None;
}
}
}
turns
}
fn get_turn_color(turn_number: usize) -> String {
let colors = vec!["blue", "green", "purple", "orange", "pink", "teal"];
colors[turn_number % colors.len()].to_string()
}
}

View File

@@ -3,7 +3,7 @@ use anyhow::Result;
use chrono::{DateTime, Utc};
use std::path::PathBuf;
use sysinfo::{System, Pid, Process};
-use tracing::{debug, warn};
+use tracing::{debug, info, warn};
pub struct ProcessDetector {
system: System,
@@ -17,7 +17,11 @@ impl ProcessDetector {
}
pub fn detect_instances(&mut self) -> Result<Vec<Instance>> {
-self.system.refresh_processes();
+info!("Scanning for g3 processes...");
// Refresh all processes to ensure we catch newly started ones
// Using refresh_all() instead of just refresh_processes() to ensure
// we get complete information about new processes
self.system.refresh_all();
let mut instances = Vec::new();
// Find all g3 processes
@@ -33,7 +37,7 @@ impl ProcessDetector {
}
}
-debug!("Detected {} g3 instances", instances.len());
+info!("Detected {} g3 instances", instances.len());
Ok(instances)
}
@@ -45,24 +49,27 @@ impl ProcessDetector {
) -> Option<Instance> {
let cmd_str = cmd.join(" ");
// Exclude g3-console itself
if cmd_str.contains("g3-console") {
return None;
}
// Check if this is a g3 binary (more comprehensive check)
let is_g3_binary = cmd.get(0).map(|s| {
-s.ends_with("g3") || s.ends_with("/g3") || s.contains("/target/release/g3") || s.contains("/target/debug/g3")
+(s.ends_with("g3") || s.ends_with("/g3") || s.contains("/target/release/g3") || s.contains("/target/debug/g3"))
+&& !s.contains("g3-") // Exclude other g3-* binaries
}).unwrap_or(false);
-// Check if this is cargo run with g3
-let is_cargo_run = cmd.get(0).map(|s| s.contains("cargo")).unwrap_or(false) && cmd.iter().any(|s| s == "run");
+// Check if this is cargo run with g3 (not g3-console or other variants)
+let is_cargo_run = cmd.get(0).map(|s| s.contains("cargo")).unwrap_or(false)
+&& cmd.iter().any(|s| s == "run")
+&& !cmd_str.contains("g3-console");
-// Also check if any part of the command line contains g3-related patterns
-let has_g3_pattern = cmd_str.contains("g3 ")
-|| cmd_str.contains("/g3 ")
-|| cmd_str.contains("g3-")
-|| cmd_str.ends_with("g3")
-|| cmd_str.contains("--workspace") // g3-specific flag
-|| cmd_str.contains("--autonomous"); // g3-specific flag
+// Also check if command line has g3-specific flags
+let has_g3_flags = cmd_str.contains("--workspace") || cmd_str.contains("--autonomous");
-// Accept if it's a g3 binary, cargo run with g3 patterns, or has g3-specific flags
-let is_g3_process = is_g3_binary || (is_cargo_run && has_g3_pattern) || has_g3_pattern;
+// Accept if it's a g3 binary or cargo run with g3, and has typical g3 patterns
+let is_g3_process = is_g3_binary || (is_cargo_run && has_g3_flags);
if !is_g3_process {
return None;
@@ -165,7 +172,7 @@ impl ProcessDetector {
}
pub fn get_process_status(&mut self, pid: u32) -> Option<InstanceStatus> {
-self.system.refresh_processes();
+self.system.refresh_all();
let sysinfo_pid = Pid::from_u32(pid);
if self.system.process(sysinfo_pid).is_some() {

View File

@@ -15,7 +15,7 @@
<div id="app">
<header class="header">
<div class="header-content">
-<h1 class="header-title">G3 Console</h1>
+<h1 class="header-title">G3 Console <span id="live-indicator" class="live-indicator" title="Scanning for processes every 3 seconds">● LIVE</span></h1>
<div class="header-actions">
<button id="new-run-btn" class="btn btn-primary">+ New Run</button>
<button id="theme-toggle" class="btn btn-secondary">🌙</button>

View File

@@ -6,6 +6,7 @@ const router = {
currentInstanceId: null,
initialized: false,
renderInProgress: false,
REFRESH_INTERVAL_MS: 3000, // Refresh every 3 seconds for live updates
init() {
console.log('[Router] init() called');
@@ -84,6 +85,9 @@ const router = {
this.renderInProgress = true;
try {
// Flash live indicator
this.flashLiveIndicator();
// Check if we already have a container for instances
let instancesList = container.querySelector('.instances-list');
const isInitialLoad = !instancesList;
@@ -167,11 +171,11 @@ const router = {
// Schedule next refresh only if still on home route
if (this.currentRoute === '/' || this.currentRoute === '') {
-console.log('[Router] Scheduling auto-refresh in 5 seconds');
+console.log(`[Router] Scheduling auto-refresh in ${this.REFRESH_INTERVAL_MS}ms`);
this.refreshTimeout = setTimeout(() => {
console.log('[Router] Auto-refresh triggered');
this.renderHome(container);
-}, 5000);
+}, this.REFRESH_INTERVAL_MS);
}
} catch (error) {
console.error('[Router] Error in renderHome:', error);
@@ -187,12 +191,26 @@ const router = {
}
},
flashLiveIndicator() {
const indicator = document.getElementById('live-indicator');
if (indicator) {
indicator.style.animation = 'none';
// Force reflow
void indicator.offsetWidth;
indicator.style.animation = null;
indicator.style.opacity = '1';
}
},
async renderDetail(container, id) {
console.log('[Router] renderDetail called for', id);
this.currentInstanceId = id;
try {
// Flash live indicator
this.flashLiveIndicator();
// Check if we already have a detail view for this instance
let detailView = container.querySelector('.detail-view');
const isInitialLoad = !detailView || detailView.getAttribute('data-instance-id') !== id;

View File

@@ -64,6 +64,22 @@ body {
color: var(--text-primary);
}
.live-indicator {
font-size: 0.625rem; /* 75% of 0.833rem */
font-weight: 600;
color: var(--success);
margin-left: 0.75rem;
display: inline-flex;
align-items: center;
gap: 0.25rem;
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.header-actions {
display: flex;
gap: 1rem;

View File

@@ -875,6 +875,21 @@ impl<W: UiWriter> Agent<W> {
}
}
// Register OpenAI-compatible providers (e.g., OpenRouter, Groq, etc.)
for (name, openai_config) in &config.providers.openai_compatible {
if providers_to_register.contains(name) {
let openai_provider = g3_providers::OpenAIProvider::new_with_name(
name.clone(),
openai_config.api_key.clone(),
Some(openai_config.model.clone()),
openai_config.base_url.clone(),
openai_config.max_tokens,
openai_config.temperature,
)?;
providers.register(openai_provider);
}
}
// Register Anthropic provider if configured AND it's the default provider
if let Some(anthropic_config) = &config.providers.anthropic {
if providers_to_register.contains(&"anthropic".to_string()) {

View File

@@ -22,6 +22,7 @@ pub struct OpenAIProvider {
base_url: String,
max_tokens: Option<u32>,
_temperature: Option<f32>,
name: String,
}
impl OpenAIProvider {
@@ -31,6 +32,24 @@ impl OpenAIProvider {
base_url: Option<String>,
max_tokens: Option<u32>,
temperature: Option<f32>,
) -> Result<Self> {
Self::new_with_name(
"openai".to_string(),
api_key,
model,
base_url,
max_tokens,
temperature,
)
}
pub fn new_with_name(
name: String,
api_key: String,
model: Option<String>,
base_url: Option<String>,
max_tokens: Option<u32>,
temperature: Option<f32>,
) -> Result<Self> {
Ok(Self {
client: Client::new(),
@@ -39,6 +58,7 @@ impl OpenAIProvider {
base_url: base_url.unwrap_or_else(|| "https://api.openai.com/v1".to_string()),
max_tokens,
_temperature: temperature,
name,
})
}
@@ -353,7 +373,7 @@ impl LLMProvider for OpenAIProvider {
}
fn name(&self) -> &str {
-"openai"
+&self.name
}
fn model(&self) -> &str {
@@ -492,4 +512,4 @@ struct OpenAIDeltaToolCall {
struct OpenAIDeltaFunction {
name: Option<String>,
arguments: Option<String>,
}

View File

@@ -1,210 +0,0 @@
# Requirements Persistence in Accumulative Mode
## Overview
In accumulative autonomous mode (`--auto` or default mode), G3 now automatically persists your requirements to a local `.g3/requirements.md` file. This provides several benefits:
1. **Persistence across sessions**: Your requirements are saved and can be resumed later
2. **Version control friendly**: Requirements are stored in a readable markdown format
3. **Easy review**: You can view and edit requirements directly in the file
4. **Transparency**: Always know what G3 is working on
## How It Works
### Automatic Saving
When you run G3 in accumulative mode:
```bash
g3
```
Each requirement you enter is automatically:
1. Added to the accumulated requirements list
2. Saved to `.g3/requirements.md` in your workspace
3. Used for the autonomous implementation run
### File Format
The `.g3/requirements.md` file uses a simple numbered list format:
```markdown
# Project Requirements
1. Create a simple web server in Python with Flask
2. Add a /health endpoint that returns JSON
3. Add logging for all requests
```
### Loading Existing Requirements
When you start G3 in a directory that already has a `.g3/requirements.md` file, it will:
1. Automatically load the existing requirements
2. Display them on startup
3. Continue numbering from where you left off
Example output:
```
📂 Loaded 3 existing requirement(s) from .g3/requirements.md
1. Create a simple web server in Python with Flask
2. Add a /health endpoint that returns JSON
3. Add logging for all requests
============================================================
📝 Turn 4 - What's next? (add more requirements or refinements)
============================================================
requirement>
```
## Commands
### View Requirements
Use the `/requirements` command to view all accumulated requirements:
```
requirement> /requirements
📋 Accumulated Requirements (saved to .g3/requirements.md):
1. Create a simple web server in Python with Flask
2. Add a /health endpoint that returns JSON
3. Add logging for all requests
```
### Other Commands
- `/help` - Show all available commands
- `/chat` - Switch to interactive chat mode (preserves requirements context)
- `exit` or `quit` - Exit the session
## File Location
The requirements file is stored at:
```
<workspace>/.g3/requirements.md
```
Where `<workspace>` is your current working directory.
## Version Control
The `.g3/` directory is automatically added to `.gitignore`, so your requirements won't be committed to version control by default. If you want to track requirements in git, you can:
1. Remove `.g3/` from `.gitignore`
2. Commit the `.g3/requirements.md` file
This can be useful for:
- Sharing requirements with team members
- Tracking requirement evolution over time
- Documenting project goals
## Manual Editing
You can manually edit `.g3/requirements.md` if needed. G3 will parse the file and load any numbered requirements (format: `1. requirement text`).
**Note**: Make sure to maintain the numbered list format for proper parsing.
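The parsing rule can be illustrated with a small predicate (a hypothetical helper, not the shipped code): a line is loaded only if it begins with ASCII digits followed by `". "`.

```rust
// Returns true for lines matching the "N. requirement text" format
// that the loader recognizes; anything else is ignored.
fn is_requirement_line(line: &str) -> bool {
    let digits: String = line.chars().take_while(|c| c.is_ascii_digit()).collect();
    // Safe byte slicing: the matched prefix is all ASCII digits.
    !digits.is_empty() && line[digits.len()..].starts_with(". ")
}

fn main() {
    assert!(is_requirement_line("1. Add logging"));
    assert!(is_requirement_line("12. Add tests"));
    assert!(!is_requirement_line("- Add logging"));  // bullet, not numbered
    assert!(!is_requirement_line("1) Add logging")); // wrong separator
    assert!(!is_requirement_line("# Project Requirements"));
}
```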
## Error Handling
If G3 cannot save or load requirements, it will:
1. Display a warning message
2. Continue operating with in-memory requirements
3. Not interrupt your workflow
Example:
```
⚠️ Warning: Could not save requirements to .g3/requirements.md: Permission denied
```
## Use Cases
### Resuming Work
```bash
# Day 1: Start a project
cd my-project
g3
requirement> Create a REST API with user authentication
# ... work happens ...
exit
# Day 2: Resume work
cd my-project
g3
# G3 automatically loads previous requirements
requirement> Add password reset functionality
```
### Reviewing Progress
```bash
# Check what you've asked G3 to build
cat .g3/requirements.md
# Or use the command within G3
requirement> /requirements
```
### Sharing Requirements
```bash
# Share requirements with a team member
cp .g3/requirements.md requirements-backup.md
# Or commit to version control
git add .g3/requirements.md
git commit -m "Add project requirements"
```
## Implementation Details
### Functions
- `ensure_g3_dir()` - Creates `.g3` directory if it doesn't exist
- `load_existing_requirements()` - Loads requirements from `.g3/requirements.md`
- `save_requirements()` - Saves requirements to `.g3/requirements.md`
### File Structure
```
my-project/
├── .g3/
│ └── requirements.md # Accumulated requirements
├── logs/ # Session logs (existing)
└── ... (your project files)
```
## Benefits
1. **No data loss**: Requirements are persisted even if G3 crashes or is interrupted
2. **Transparency**: Always know what G3 is working on
3. **Resumability**: Pick up where you left off in any session
4. **Documentation**: Requirements serve as project documentation
5. **Collaboration**: Share requirements with team members
6. **Auditability**: Track what was requested and when
## Comparison with Traditional Autonomous Mode
| Feature | Accumulative Mode | Traditional `--autonomous` |
|---------|------------------|---------------------------|
| Requirements file | `.g3/requirements.md` | `requirements.md` (root) |
| Auto-save | ✅ Yes | ❌ No (manual edit) |
| Interactive | ✅ Yes | ❌ No |
| Incremental | ✅ Yes | ❌ No (one-shot) |
| Resume support | ✅ Yes | ⚠️ Manual |
## Future Enhancements
Potential future improvements:
- Requirement status tracking (pending, in-progress, completed)
- Requirement dependencies and ordering
- Requirement templates and snippets
- Integration with issue trackers
- Requirement validation and linting

View File

@@ -1,36 +0,0 @@
#!/bin/bash
# Test script for .g3/requirements.md feature
set -e
echo "Testing .g3/requirements.md feature..."
echo ""
# Create a test directory
TEST_DIR="/tmp/g3_test_$$"
mkdir -p "$TEST_DIR"
cd "$TEST_DIR"
echo "Test directory: $TEST_DIR"
echo ""
# Create a simple test by simulating user input
echo "Testing requirement persistence..."
echo ""
# Check if .g3 directory gets created
if [ ! -d ".g3" ]; then
echo "✅ .g3 directory does not exist yet (expected)"
else
echo "❌ .g3 directory already exists (unexpected)"
fi
echo ""
echo "Test directory created at: $TEST_DIR"
echo "You can manually test by running:"
echo " cd $TEST_DIR"
echo " g3"
echo ""
echo "Then enter a requirement and check if .g3/requirements.md is created."
echo ""