Removed the persistent_chrome config flag - chromedriver is now always
kept running after webdriver_quit. This eliminates startup latency for
subsequent WebDriver sessions.
Safaridriver is still killed on quit since it doesn't benefit from
persistence in the same way.
Updated quit message to correctly indicate chromedriver remains running.
When webdriver_start is called, it now checks whether chromedriver is
already running on the configured port and reuses it instead of spawning
a new process. This significantly reduces startup time for subsequent sessions.
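A minimal sketch of the reuse check, assuming a plain TCP probe on the
loopback address (helper name hypothetical):

use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

// Hypothetical helper: a successful connect means something (presumably
// chromedriver) is already listening on the configured port.
fn chromedriver_running(port: u16) -> bool {
    let addr = SocketAddr::from(([127, 0, 0, 1], port));
    TcpStream::connect_timeout(&addr, Duration::from_millis(200)).is_ok()
}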
New config option:
[webdriver]
persistent_chrome = true # Keep chromedriver running between sessions
When enabled, webdriver_quit closes the browser session but leaves
chromedriver running for reuse by the next session.
When --safari was passed, Chrome diagnostics were still running because
--chrome-headless defaults to true. This caused the CLI to hang while
running diagnostics for a browser that wouldn't be used.
Now skip Chrome diagnostics when --safari is explicitly set.
Simplify print_context_thinning to just print the message directly.
The message already contains proper ANSI formatting from context_window.rs.
Remove the flash animation and 'Context optimized successfully' footer.
Change format from verbose emoji-based message to cleaner status line:
Before: ✨🥒 Context thinned at 70%: 7 tool results, ~33839 chars saved ✨
After: g3: thinning context ... 70% -> 40% ... [done]
The new format shows before/after percentages and uses bold green for
'g3:' and '[done]' to match other status messages.
Also removes unused emoji() and label() methods from ThinScope.
The Anthropic API has a 5MB limit on base64-encoded images, not raw file
size. Base64 encoding increases size by ~33% (4/3 ratio), so a 4MB raw
image becomes ~5.3MB encoded, exceeding the limit.
Changed MAX_IMAGE_SIZE from 5MB to ~3.75MB (5MB * 3/4) to trigger
resizing before the base64-encoded result exceeds the API limit.
Also updated target resize size to 3.6MB to leave margin.
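The budget math as a sketch (MAX_IMAGE_SIZE is the constant from the
commit; the limit name is illustrative):

// base64 encodes 3 raw bytes as 4 output bytes, so a 5MB encoded cap
// leaves room for at most 5MB * 3/4 of raw image data.
const API_ENCODED_LIMIT: u64 = 5 * 1024 * 1024;        // illustrative name
const MAX_IMAGE_SIZE: u64 = API_ENCODED_LIMIT / 4 * 3; // ~3.75MB raw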
Images >= 5MB are now automatically resized to < 4.9MB using ImageMagick
before being sent to the LLM. This prevents API errors from oversized images.
- Uses iterative quality/scale reduction to find an optimal size (sketched below)
- Converts to JPEG for better compression
- Shows original and resized size in terminal output (e.g., '6.2 MB → 4.1 MB (resized)')
- Falls back to original if ImageMagick fails or isn't available
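A hedged sketch of the reduction loop, assuming ImageMagick 7's magick
binary; the quality steps and scale factor are illustrative, not the
shipped values:

use std::process::Command;

// Sketch only: re-encode as JPEG at decreasing quality until it fits.
fn resize_under_limit(src: &str, dst: &str, limit: u64) -> std::io::Result<bool> {
    for quality in ["85", "70", "55", "40"] {
        let status = Command::new("magick")
            .args([src, "-resize", "85%", "-quality", quality, dst])
            .status()?;
        if !status.success() {
            return Ok(false); // ImageMagick missing/failed: keep the original
        }
        if std::fs::metadata(dst)?.len() < limit {
            return Ok(true);
        }
    }
    Ok(false)
}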
Adds tests to verify that:
- All streaming chunks are processed before control returns to caller
- Both tool calls in a multi-tool-call stream are executed
- The finished signal properly terminates stream processing
Also adds Agent::new_for_test() to allow injecting mock providers.
The response was being printed twice: once during streaming and again
after task completion. Removed the redundant print_smart() call since
streaming already displays the response in real-time.
When running g3 --agent butler, the process title is now "g3 [butler]"
which shows up in ps, Activity Monitor, top, etc.
Uses the proctitle crate for cross-platform support.
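Roughly, via the crate's set_title entry point:

// Hypothetical wrapper; proctitle updates the name that ps/top/Activity
// Monitor display.
fn set_agent_title(agent: &str) {
    proctitle::set_title(format!("g3 [{agent}]"));
}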
When running g3 --agent <name> --chat:
- Skip per-turn memory checkpoint calls (too onerous)
- Call memory checkpoint once when exiting (Ctrl-D)
When running g3 --agent <name> (single-shot):
- Preserve existing behavior: call memory checkpoint after each turn
This keeps the auto-memory feature useful without being intrusive
in interactive agent sessions.
When running g3 --agent <name> --chat, the output is now minimal:
- Workspace path (-> ~/path)
- Status line (README/AGENTS.md/Memory)
- Context progress bar
- Prompt (g3>)
Skipped in this mode:
- Session resume prompts
- "agent mode | name (source)" header
- "g3 programming agent" welcome
- Provider info display
- Language guidance messages
Added a from_agent_mode parameter to run_interactive() to control
whether the verbose welcome and session resume prompt are shown.
The JSON filter only suppresses tool calls at line boundaries. When
"Memory checkpoint: " was printed without a trailing newline, the LLM
response `{"tool": "remember", ...}` appeared on the same line and
leaked through to the UI.
Fix:
- Add trailing newline to "Memory checkpoint:" message
- Reset JSON filter state before streaming the response
Added test: test_tool_call_not_at_line_start_passes_through, which
documents the filter behavior and references the fix location.
- Remove chat from conflicts_with_all for --agent flag
- Add chat parameter to run_agent_mode()
- Run interactive loop instead of single task when --chat is passed
Usage: g3 --agent <name> --chat
- Shell outputs > 8KB are truncated to first 500 chars
- Full output saved to .g3/sessions/<session_id>/tools/shell_stdout_<id>.txt
- LLM can use read_file with start/end to paginate through large outputs
- read_file now uses seek() for O(1) random access instead of reading the entire file (sketched below)
- UTF-8 safe: reads extra bytes at boundaries to find valid char positions
- Falls back to lossy conversion for binary files (no panics)
Files changed:
- paths.rs: get_tools_output_dir(), generate_short_id()
- shell.rs: truncate_large_output() integration
- file_ops.rs: seek-based read_file_range() helper
- New test: read_file_utf8_test.rs
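A minimal sketch of the seek-based read (signature assumed; the shipped
version reads a few extra bytes to land on char boundaries, while this
sketch just falls back to lossy conversion):

use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

// Signature assumed for illustration.
fn read_file_range(path: &str, start: u64, len: usize) -> std::io::Result<String> {
    let mut f = File::open(path)?;
    f.seek(SeekFrom::Start(start))?; // O(1) jump instead of reading from byte 0
    let mut buf = vec![0u8; len];
    let n = f.read(&mut buf)?;
    buf.truncate(n);
    Ok(match String::from_utf8(buf) {
        Ok(s) => s,
        // A multi-byte char split at either edge, or binary content:
        // lossy conversion substitutes U+FFFD instead of panicking.
        Err(e) => String::from_utf8_lossy(&e.into_bytes()).into_owned(),
    })
}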
- Replace verbose auto-accept messages with single line
- Format: 'studio: session <id> ... [merged]'
- Refactor cmd_accept to use accept_session() with configurable prefix
- Remove 'completed successfully' and 'Auto-accepting' messages
- Replace verbose multi-line output with single line
- Format: 'studio: new session <id>'
- 'studio:' in bold green, session id in inline-code orange (RGB 216,177,114)
- Remove separator lines and 'Starting g3 agent' message
- Change verbose emoji messages to minimal format
- Print '> session <id> ...' first, then status after operation completes
- 'merged' shown in bold green
- 'discarded' shown in bold yellow
- Fix aliasing issue where resolve_max_tokens() used fallback_default_max_tokens
(8192) instead of provider-specific defaults
- Update fallback_default_max_tokens from 8192 to 32000
- Set provider-specific max_tokens defaults:
  - Anthropic: 32000
  - OpenAI: 32000 (was 16000)
  - Databricks: 32000 (was 50000, now matches Anthropic as passthru)
  - Embedded: 2048
- Context window lengths unchanged:
  - OpenAI: 400,000
  - Anthropic: 200,000
  - Databricks (Claude): 200,000
This fixes the 'LLM response was cut off due to max_tokens limit' error
in agent mode that occurred because 8192 was being used instead of 32000.
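Sketch of the resolution order (the real code presumably keys off a
provider enum; this just mirrors the listed defaults):

// Provider matching by string is an assumption for illustration.
fn resolve_max_tokens(configured: Option<u32>, provider: &str) -> u32 {
    configured.unwrap_or(match provider {
        "anthropic" | "openai" | "databricks" => 32_000,
        "embedded" => 2_048,
        _ => 32_000, // fallback_default_max_tokens, raised from 8192
    })
}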
New commands:
- studio cli (alias: c) - Start a new interactive g3 session in an isolated worktree
- studio resume <id> (alias: r) - Resume a paused interactive session
- Bare 'studio' now defaults to 'studio cli'
Session changes:
- Added SessionStatus::Paused for sessions that can be resumed
- Added SessionType enum (OneShot, Interactive) for future use
- Interactive sessions use inherited stdio for direct TTY access
- Sessions are marked as Paused when user exits g3
Workflow:
1. studio # creates worktree, runs g3 interactively
2. (work in g3, exit when done)
3. studio resume <id> # continue working
4. studio accept <id> # merge to main when finished
The print_todo_compact() function was missing the call to clear the
streaming hint line before printing the final tool output, so the tool
name appeared twice:
● todo_read ● todo_read | empty
Added the missing handle_hint(ToolParsingHint::Complete) call to match
the behavior of print_tool_compact().
When --accept was passed after positional args (e.g., 'studio run --agent
carmack task --accept'), clap's trailing_var_arg captured it as part of
g3_args instead of parsing it as the studio flag. This caused g3 to error
with 'unexpected argument --accept'.
- Extract filter_accept_flag() helper to detect and remove --accept from
trailing args (sketched below)
- Set auto_accept=true if --accept found in either position
- Add 5 unit tests for the filtering logic
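A sketch of the helper under an assumed signature:

// Signature assumed. Strip every --accept from the trailing args and
// report whether one was present, so auto_accept can be set either way.
fn filter_accept_flag(args: Vec<String>) -> (Vec<String>, bool) {
    let mut found = false;
    let filtered = args
        .into_iter()
        .filter(|a| {
            if a == "--accept" {
                found = true;
                false
            } else {
                true
            }
        })
        .collect();
    (filtered, found)
}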
Agent: carmack
Changes:
- streaming_parser.rs: Unified find_first/last_tool_call_start into single
find_tool_call_start with SearchDirection enum, reducing duplication.
Simplified is_json_invalidated from 45 to 20 lines with clearer logic.
Fixed redundant !escape_next check in find_complete_json_object_end.
- filter_json.rs: Simplified check_tool_pattern from 40 to 24 lines.
Replaced repetitive prefix checks with loop over ["t", "to", "too", "tool"].
Reduced trailing return statements with direct expression returns.
- ui_writer_impl.rs: Added ansi module for duration color constants.
Simplified duration_color function by removing redundant comments.
- language_prompts.rs: Fixed test assertions to match actual prompt content
("obvious, readable Racket" instead of "RACKET-SPECIFIC GUIDANCE").
All 174+ tests pass. No behavior changes.
- Add ToolParsingHint enum (Detected/Active/Complete) for UI feedback (sketched below)
- New UiWriter methods: print_tool_streaming_hint(), print_tool_streaming_active()
- Refactor ConsoleUiWriter state to use atomics in ParsingHintState
- Add tool_call_streaming field to CompletionChunk for provider hints
- Anthropic provider sends streaming hints when tool name detected
- New streaming helpers: make_tool_streaming_hint(), make_tool_streaming_active()
Parser improvements:
- Add is_json_invalidated() to detect false positive tool patterns
- Fix tool result poisoning when file contents contain partial JSON
- Unescaped newlines in strings, or prose after the JSON, invalidate detection
User sees ' ● tool_name |' immediately when tool call starts streaming,
with blinking indicator while args are received.
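The enum as described in the first change above:

// Variant names are from the commit; the comments are inferred from
// the described UI behavior.
enum ToolParsingHint {
    Detected, // a tool call pattern was seen; show the hint line
    Active,   // the tool name is known; blink while args stream in
    Complete, // the call fully parsed; clear the hint line
}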
When partial JSON tool call patterns appear in LLM output (e.g., from
quoting file content), the parser would incorrectly report them as
"incomplete tool calls", triggering auto-continue loops.
Fix: Added is_json_invalidated() to detect when partial JSON has been
invalidated by subsequent content that cannot be valid JSON:
- Unescaped newline inside a string (invalid JSON)
- Newline followed by prose text outside a string
The check is only applied to incomplete JSON - complete tool calls
with trailing text are still correctly detected.
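A simplified sketch of the two rules (the shipped parser tracks more
state; the prose check here is a crude stand-in):

fn is_json_invalidated(partial: &str) -> bool {
    let (mut in_string, mut escaped) = (false, false);
    for (i, c) in partial.char_indices() {
        match c {
            '\\' if in_string && !escaped => { escaped = true; continue; }
            '"' if !escaped => in_string = !in_string,
            // Rule 1: an unescaped newline inside a string is invalid JSON.
            '\n' if in_string => return true,
            // Rule 2: a newline followed by prose outside a string.
            // (Crude: real JSON tokens like true/null would need whitelisting.)
            '\n' => {
                let next = partial[i + 1..].trim_start().chars().next();
                if next.map_or(false, |c| c.is_alphabetic()) {
                    return true;
                }
            }
            _ => {}
        }
        escaped = false;
    }
    false
}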
Added 6 new tests covering:
- Tool results with partial JSON patterns
- LLM quoting file content inline vs on own line
- Comment prefixes (// # -- etc) with partial patterns
- Real incomplete tool calls (should still be detected)
The streaming parser was incorrectly detecting tool call patterns that
appeared inline in prose (e.g., when explaining the format), causing
g3 to return control mid-task.
Fix: Modified find_first_tool_call_start() and find_last_tool_call_start()
to only recognize patterns that appear on their own line (at start of
buffer or after newline with only whitespace before the pattern).
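The helper's assumed shape:

// A pattern at byte offset pos counts only if everything between the
// preceding newline (or buffer start) and pos is whitespace.
fn is_on_own_line(buf: &str, pos: usize) -> bool {
    let line_start = buf[..pos].rfind('\n').map_or(0, |i| i + 1);
    buf[line_start..pos].chars().all(char::is_whitespace)
}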
Changes:
- Added is_on_own_line() helper to check line-boundary conditions
- Updated detection methods to skip inline patterns
- Removed sanitize_inline_tool_patterns() and LBRACE_HOMOGLYPH (no longer needed)
- Rewrote tests for new behavior
- Added streaming_repro tests that use process_chunk() to verify the exact bug scenario
28 tests covering: streaming repro, line boundaries, Unicode, code contexts, edge cases
Rust-specific readability guidance for the carmack agent including:
- let...else example for shallow control flow (see the sketch after this list)
- Async: don't block the runtime (tokio::fs, spawn_blocking, Send)
- Visibility: prefer pub(crate), private fields with accessors
- Generics: impl Trait over explicit params, avoid complex where clauses
- Improved iterator guidance: if you need a comment, use a loop
- UTF-8 string slicing warnings
- Ownership/lifetime pragmatism
- Anti-patterns: no macros/typestate/proc-macros unless already in repo
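For illustration, a let...else in the shape that guidance recommends
(example mine, not taken from the prompt file):

// Illustrative only: bail out shallowly instead of nesting the happy path.
fn workspace_name(path: &std::path::Path) -> String {
    let Some(name) = path.file_name() else {
        return String::from("(root)");
    };
    name.to_string_lossy().into_owned()
}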
Also adds Rust detection to LANGUAGE_PROMPTS (empty base prompt,
agent-specific prompts handle the guidance).
Automatically accept the session after g3 completes successfully,
but only if there are commits on the branch.
Changes:
- Add --accept flag to Run command (stripped, not passed to g3)
- Add has_commits_on_branch() helper using git rev-list --count (sketched below)
- Auto-accept triggers merge to main and cleanup when:
1. g3 exits successfully (exit code 0)
2. Branch has commits ahead of main
- Show warning if --accept set but no commits exist
Usage: studio run --agent carmack --accept
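Under an assumed signature (the commit names only the git command):

use std::process::Command;

// Signature assumed. `git rev-list --count base..branch` counts commits
// on the branch that base does not have.
fn has_commits_on_branch(branch: &str, base: &str) -> bool {
    Command::new("git")
        .args(["rev-list", "--count"])
        .arg(format!("{base}..{branch}"))
        .output()
        .ok()
        .and_then(|o| String::from_utf8(o.stdout).ok())
        .and_then(|s| s.trim().parse::<u64>().ok())
        .map_or(false, |n| n > 0)
}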
When running in agent mode (e.g., --agent carmack) in a workspace with
detected languages, inject agent+language-specific prompts from
prompts/langs/<agent>.<lang>.md at the end of the system prompt.
Changes:
- Add AGENT_LANGUAGE_PROMPTS static array for compile-time embedding
- Add get_agent_language_prompt() to look up specific agent+lang combos
- Add get_agent_language_prompts_for_workspace_with_langs() that returns
both content and matched languages for display
- Update agent_mode.rs to inject prompts and show which languages loaded
- Display format: '✓ carmack: racket language guidance'
- Add tests for new functionality
Uses the same detect_languages() mechanism as regular language prompts
to avoid code-path aliasing.
Use try_init() instead of init() for tracing subscriber setup to
gracefully handle cases where a global subscriber is already set.
This fixes a panic in the scout agent subprocess when spawned by the
research tool, where a dependency may have already initialized tracing.
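Minimal sketch of the change:

fn initialize_logging() {
    // try_init returns Err instead of panicking when a global subscriber
    // is already set (e.g. by a dependency in the scout subprocess).
    let _ = tracing_subscriber::fmt().try_init();
}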
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move initialize_logging() call to run immediately after CLI parsing,
before any mode checks. This ensures the --verbose flag works correctly
in planning mode, which previously bypassed logging initialization.
Previously, planning mode would return early before initialize_logging()
was called, causing verbose output to be silently ignored.
- Add language_prompts module that auto-detects programming languages in workspace
- Scan for language files with depth limit (2) to inject relevant toolchain prompts
- Add prompts/langs/ directory for language-specific markdown files
- Include Racket/raco toolchain guidance as first language prompt
- Update combine_project_content() to accept language_content parameter
- Integrate language detection into main CLI flow and agent mode
- Update project memory with new feature documentation
LLMs were prefixing shell commands with `cd <workspace> &&` unnecessarily,
wasting tokens and cluttering CLI display. Added clear guidance in the
shell tool description that commands already execute in the working directory.
• Agent prompts are now embedded within the g3 binary
• README.md - Added new "Agent Mode" section documenting:
  • All 7 built-in agents with their focus areas
  • Usage examples (--list-agents, --agent <name>)
  • How to create custom workspace agents
Behavior:
1. Workspace agents take priority - If agents/<name>.md exists in the workspace, it's used
2. Embedded fallback - If no workspace agent exists, the embedded version is used
3. Portability - g3 binary now works on any repo without needing the agents/ directory
4. Discoverability - g3 --list-agents shows all available agents and their source
When users explicitly pass --new-session, they want a fresh session.
Previously g3 would still prompt to resume an existing session.
Now the resume check is skipped entirely when the flag is set.
- Rename take_screenshot -> screenshot, code_coverage -> coverage (shorter names)
- Align | character across all compact tools (pad to 11 chars for str_replace)
- Make code_search a compact tool with summary display
- Show language and search name in code_search output (e.g., rust:"find structs")
- Add format_code_search_summary() to extract match/file counts from JSON response
Remove dead code - constructor variants that had no callers:
- new_with_readme()
- new_autonomous_with_readme()
- new_with_quiet()
These were thin wrappers around new_with_mode_and_readme() that were
never used externally. All 5 remaining constructors have verified callers.
Results:
- lib.rs reduced from 2817 to 2797 lines (-20 lines)
- Eliminated code-path aliasing: 8 constructors → 5 constructors
- All g3-core tests pass
- Full workspace compiles cleanly
Agent: fowler