Add characterization tests for the streaming parser stuttering bug fix (fa3c920).
These tests verify that when an LLM "stutters" and emits incomplete tool call
fragments followed by complete tool calls, the parser:
1. Does not get stuck waiting for the incomplete fragment to complete
2. Successfully parses complete tool calls that appear after the fragment
Tests cover:
- The exact pattern from butler session butler_c6ab59af2e4f991c
- Edge cases that should NOT trigger invalidation (nested JSON, patterns in strings)
- Recovery behavior after reset
- Multiple complete tool calls
- Boundary conditions (chunk boundaries, minimal patterns)
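A minimal sketch of one such test; StreamingParser, feed(), and
completed_tool_calls() are hypothetical stand-ins, not the actual
parser API:

    #[test]
    fn stutter_then_complete_call_is_parsed() {
        // Hypothetical API - illustrates the shape of the assertion,
        // not the real parser interface.
        let mut parser = StreamingParser::new();
        parser.feed("{\"tool\":\n"); // abandoned fragment
        parser.feed("{\"tool\": \"shell\", \"args\": {\"command\": \"ls\"}}\n");
        let calls = parser.completed_tool_calls();
        // The fragment must not block the complete call that follows.
        assert_eq!(calls.len(), 1);
        assert_eq!(calls[0].name, "shell");
    }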
Agent: hopper
Fixes issues in the last 11 commits:
1. pending_research.rs: Fix flaky test_generate_id_uniqueness
- Replaced random u16 suffix with atomic counter for guaranteed uniqueness
- The timestamp+random approach could collide when generating IDs rapidly
- Now uses a static AtomicU32 counter that increments monotonically
(sketched after this list)
2. embedded/adapters/glm.rs: Remove unused in_code_fence field
- Field was written but never read (dead code)
- Removed from struct definition, constructor, and reset()
3. embedded/adapters/glm.rs: Fix orphaned tests
- Two tests (test_strip_code_fences, test_code_fenced_tool_call) were
outside the #[cfg(test)] mod tests block
- Moved closing brace to include them in the test module
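One plausible shape of the counter-based ID (the "research_" prefix
and exact layout are assumptions, not the actual code):

    use std::sync::atomic::{AtomicU32, Ordering};

    static ID_COUNTER: AtomicU32 = AtomicU32::new(0);

    /// Unique even for IDs generated within the same millisecond: the
    /// timestamp gives coarse ordering, the counter guarantees
    /// uniqueness where the old random u16 suffix could collide.
    fn generate_id() -> String {
        let ts = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis();
        let n = ID_COUNTER.fetch_add(1, Ordering::Relaxed);
        format!("research_{ts}_{n}")
    }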
All 446 library tests pass.
Agent: fowler
Allow users to view research reports directly from the CLI:
- /research - List all research tasks (unchanged)
- /research <id> - View the full report for a specific research task
- /research latest - View the most recent completed research report
Report display includes query, status, elapsed time, and full content.
When the LLM 'stutters' and emits incomplete tool call fragments like:
{"tool": "shell", "args": {...}}
{"tool":
{"tool": "shell", "args": {...}}
The parser would get stuck waiting for the incomplete fragment to complete,
causing the entire response to be lost (no tool executed, no text displayed).
This was observed in butler session butler_c6ab59af2e4f991c where the user's
'send!' command produced no response.
Fix: Enhanced is_json_invalidated() to detect when a new tool call
pattern (`{"tool"`) appears after a newline while parsing an incomplete
JSON fragment. This indicates the previous fragment was abandoned and
should be invalidated (see the sketch below).
Safety:
- Tool patterns inside JSON strings (e.g., writing example code) are not
affected because the check only runs outside strings
- Added tests for the stuttering pattern and the file-writing edge case
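Simplified sketch of the detection; the real is_json_invalidated()
runs only while an incomplete fragment is being parsed and tracks more
state, but the string-awareness is the key safety property:

    /// True when a new tool-call pattern starts on a fresh line, i.e.
    /// the earlier fragment was abandoned. String literals (including
    /// escapes) are skipped, so tool patterns inside written-out JSON
    /// never trigger invalidation.
    fn looks_abandoned(buffer: &str) -> bool {
        let (mut in_string, mut escaped, mut at_line_start) = (false, false, false);
        for (i, c) in buffer.char_indices() {
            if in_string {
                if escaped { escaped = false; }
                else if c == '\\' { escaped = true; }
                else if c == '"' { in_string = false; }
                continue;
            }
            match c {
                '"' => in_string = true,
                '\n' => at_line_start = true,
                '{' if at_line_start => {
                    if buffer[i..].starts_with("{\"tool\"") {
                        return true;
                    }
                }
                _ => {}
            }
            if c != '\n' { at_line_start = false; }
        }
        false
    }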
When background research completes, g3 now immediately prints a status
message instead of waiting for the next user interaction:
- Added ResearchCompletionNotification and a broadcast channel to
PendingResearchManager for push-based notifications (sketched below)
- Added spawn_research_notification_handler() in interactive mode that
listens for completions in a background task
- When idle (at prompt): clears line, prints status, reprints prompt
- When busy (processing): prints status inline (interleaving is fine)
- Added G3Status::research_complete() for consistent formatting
- Added enable_research_notifications() method to Agent
Output format: "g3: 1 research report ... [done]"
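Sketch of the wiring with a tokio broadcast channel (struct and
function names follow this commit; signatures are assumptions):

    use tokio::sync::broadcast;

    #[derive(Clone, Debug)]
    struct ResearchCompletionNotification {
        research_id: String,
    }

    // Manager side: fan out completions; a send error only means no
    // listener is subscribed yet, which is harmless.
    fn notify(tx: &broadcast::Sender<ResearchCompletionNotification>, id: String) {
        let _ = tx.send(ResearchCompletionNotification { research_id: id });
    }

    // Interactive side: background task prints one status line per
    // completion (the real handler also redraws the prompt when idle).
    fn spawn_handler(mut rx: broadcast::Receiver<ResearchCompletionNotification>) {
        tokio::spawn(async move {
            while let Ok(n) = rx.recv().await {
                println!("g3: research {} [done]", n.research_id);
            }
        });
    }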
Fixes three bugs in the input formatter introduced in 4e16942:
1. Bug 2 & 3 (missing newline, line duplication):
- Changed print! to println! to add trailing newline
- Calculate visual lines based on terminal width instead of
logical line count, fixing duplication for wrapped lines
2. Bug 1 (^M on non-interactive prompts):
- Added TTY check to skip formatting when stdout is not a terminal
- Prevents terminal state corruption for stdin prompts
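The wrapped-line arithmetic behind the fix, as a sketch: a logical
line of width w occupies ceil(w / cols) rows on a terminal cols wide,
so clearing must count visual rows, not logical lines.

    /// Visual rows `text` occupies on a terminal `cols` (>= 1) wide.
    /// Approximates display width as one cell per char (no ANSI or
    /// wide-character handling).
    fn visual_lines(text: &str, cols: usize) -> usize {
        text.lines()
            .map(|line| {
                let w = line.chars().count();
                if w == 0 { 1 } else { (w + cols - 1) / cols }
            })
            .sum::<usize>()
            .max(1)
    }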
The research tool now spawns the scout agent in a background tokio
task and returns immediately with a research_id placeholder, letting
the agent continue working while research runs (30-120 seconds). A
sketch follows the lists below.
Key changes:
- New PendingResearchManager for tracking async research tasks
- research tool returns immediately with placeholder containing research_id
- research_status tool to check progress of pending research
- Auto-injection of completed research at natural break points:
- Start of each tool iteration (before LLM call)
- Before prompting user in interactive mode
- /research CLI command to list all research tasks
- Updated system prompt to explain async behavior
The agent can:
- Continue with other work while research runs
- Check status with research_status tool
- Yield turn to user if results are critical before continuing
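A sketch of the spawn-and-placeholder flow; Manager is a minimal
stand-in for PendingResearchManager, and all names besides tokio's
are illustrative:

    use std::collections::HashMap;
    use std::sync::atomic::{AtomicU32, Ordering};
    use std::sync::{Arc, Mutex};

    #[derive(Default)]
    struct Manager {
        reports: Mutex<HashMap<String, String>>,
        next: AtomicU32,
    }

    impl Manager {
        fn new_id(&self) -> String {
            format!("research_{}", self.next.fetch_add(1, Ordering::Relaxed))
        }
        fn complete(&self, id: &str, report: String) {
            self.reports.lock().unwrap().insert(id.to_string(), report);
        }
    }

    async fn run_scout_agent(query: &str) -> String {
        // Placeholder for the real scout run (30-120s in practice).
        format!("report for: {query}")
    }

    async fn start_research(manager: Arc<Manager>, query: String) -> String {
        let research_id = manager.new_id();
        let (mgr, id) = (Arc::clone(&manager), research_id.clone());
        tokio::spawn(async move {
            let report = run_scout_agent(&query).await;
            mgr.complete(&id, report); // injected at the next break point
        });
        // Returned immediately so the agent keeps working.
        format!("research started: {research_id} (poll with research_status)")
    }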
When users type prompts in interactive mode, the input is now
reformatted in place with enhanced highlighting:
- ALL CAPS words (2+ chars) become bold green (e.g., FIX, BUG,
HTTP2); see the sketch below
- Quoted text ("..." or '...') becomes cyan
- Standard markdown formatting is also supported
New module: input_formatter.rs with 10 unit tests
Integrated into interactive.rs for both single-line and multiline input
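The ALL-CAPS rule, sketched with the regex crate (ANSI escapes
written out; the quoted-text and markdown rules are handled similarly
in the real module):

    use regex::Regex;

    /// Bold-green any word of 2+ uppercase letters/digits that starts
    /// with a letter (FIX, BUG, HTTP2); other text passes through.
    fn highlight_caps(input: &str) -> String {
        let re = Regex::new(r"\b[A-Z][A-Z0-9]+\b").unwrap();
        re.replace_all(input, "\x1b[1;32m$0\x1b[0m").into_owned()
    }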
- Add CommonFlags struct to group flags that apply across all modes
(sketched below)
- Refactor run_agent_mode() to accept CommonFlags instead of individual params
- Add project loading logic for agent chat mode
- Add integration tests for --project with agent mode
This refactor prevents future bugs where new flags work in one mode
but are forgotten in another.
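Shape of the grouping, assuming clap's derive API (the field set here
is illustrative):

    use clap::Args;

    /// Flags shared by every mode; a flag added here is automatically
    /// available in all modes instead of just the one it was written
    /// for.
    #[derive(Args, Debug, Clone)]
    pub struct CommonFlags {
        /// Load project files at startup.
        #[arg(long)]
        pub project: Option<std::path::PathBuf>,
        /// Suppress non-essential output.
        #[arg(long)]
        pub quiet: bool,
    }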
GLM-4 models wrap tool calls in markdown code fences and inline
backticks, preventing the streaming parser from detecting them. This
adapter:
- Strips ```json and ``` code fence markers during streaming
(sketched below)
- Strips inline backticks from tool call JSON
- Handles chunked streaming correctly (buffers potential fence lines)
- Transforms GLM native format (<|assistant|>tool_name) to g3 JSON format
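The fence handling, reduced to its line-level rule (the real adapter
additionally buffers partial lines across chunk boundaries before
deciding):

    /// Drop lines that are only a fence marker; elsewhere, strip
    /// inline backticks so the tool-call JSON parses cleanly.
    fn strip_fences(line: &str) -> Option<String> {
        let t = line.trim();
        if t == "```" || t == "```json" {
            return None; // fence marker: emit nothing
        }
        Some(line.replace('`', ""))
    }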
Also refactors embedded provider into module structure:
- embedded/mod.rs - module exports
- embedded/provider.rs - main EmbeddedProvider (moved from embedded.rs)
- embedded/adapters/mod.rs - ToolFormatAdapter trait
- embedded/adapters/glm.rs - GLM-specific adapter
Includes 22 unit tests covering edge cases like nested JSON in strings,
chunk boundary handling, and false pattern detection.
Updates README to show GLM-4 9B now works (⭐⭐) for agentic tasks.
embedded.rs (937→789 lines, -16%):
- Extract duplicated inference setup into prepare_context() helper
- Extract stop sequence handling into find_stop_sequence() and
truncate_at_stop_sequence() (sketched below)
- Add InferenceParams struct to consolidate request parameter extraction
- Add clear section markers for code organization
- Tests now use module-level format functions directly (no duplication)
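The extracted helpers, roughly (signatures are assumptions):

    /// Byte offset of the earliest stop sequence in `text`, if any.
    fn find_stop_sequence(text: &str, stops: &[String]) -> Option<usize> {
        stops.iter().filter_map(|s| text.find(s.as_str())).min()
    }

    /// Truncate generated text at the first stop sequence.
    fn truncate_at_stop_sequence<'a>(text: &'a str, stops: &[String]) -> &'a str {
        match find_stop_sequence(text, stops) {
            Some(i) => &text[..i],
            None => text,
        }
    }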
gemini.rs:
- Extract common request building into build_request() method
- Reduces duplication between complete() and stream() methods
All 399 unit tests pass. Behavior unchanged.
Agent: carmack
Agent: hopper
Added two new integration test files:
1. cache_stats_integration_test.rs (g3-core)
- Tests CacheStats accumulation through streaming completion flow
- Verifies cache hit detection (cache_read_tokens > 0)
- Tests multi-request accumulation of cache statistics
- Verifies cache efficiency and hit rate calculations
- Uses MockProvider to simulate provider usage data
2. gemini_serialization_test.rs (g3-providers)
- Tests Gemini API message format conversion
- Verifies system messages become system_instruction
- Verifies assistant role maps to "model" (Gemini terminology)
- Tests tool conversion to function_declarations format
- Characterizes multi-system-message behavior (last wins)
Both test files follow blackbox/integration testing principles:
- Test observable behavior through stable surfaces
- Do not assert internal implementation details
- Include documentation of what is/is not asserted
Named after David Huffman, inventor of Huffman coding -
compression that preserves information with fewer bits.
Fits the agent's purpose: compact memory, preserve semantics.
README.md is no longer auto-loaded into the LLM context at startup.
This saves ~4,600 tokens per session while AGENTS.md and memory.md
still provide all critical information for code tasks.
Changes:
- Delete read_project_readme() function
- Remove readme_content parameter from combine_project_content()
- Rename extract_readme_heading() -> extract_project_heading()
- Rename Agent constructors: *_with_readme_* -> *_with_project_context_*
- Update context preservation to only check for Agent Configuration
- Remove has_readme field from LoadedContent
- Update all tests to use new markers and function names
The LLM can still read README.md on-demand via read_file when needed.
Adds a new --project <PATH> flag that loads project files (brief.md,
contacts.yaml, status.md) at startup, similar to the /project command
but WITHOUT auto-executing the project status prompt.
Changes:
- Add --project flag to cli_args.rs
- Add load_and_validate_project() helper in project.rs (shared by both
--project flag and /project command)
- Modify run_interactive() to accept optional initial_project parameter
- Wire up --project in lib.rs to load project before interactive mode
- Refactor /project command to use shared helper (reduces duplication)
- Add 4 new tests for load_and_validate_project()
Added a new section documenting local LLM performance on complex agentic
tasks (comic book repacking test case). Includes:
- Cloud model baseline (Claude Opus 4.5, Sonnet 4.5, Claude 4 family)
- Local model ratings (Qwen3-32B, Qwen3-14B, GLM-4 9B, Qwen3-4B)
- Key findings about MoE vs dense models
- Configuration example for embedded providers
- Add GeminiProvider with streaming and native tool calling
- Support gemini-2.5-pro, gemini-2.0-flash, gemini-1.5-pro/flash models
- Model-specific context window detection (1M-2M tokens)
- Message conversion: assistant -> model role mapping (sketched
below)
- System messages extracted to the system_instruction field
- Tool schema conversion with functionCall/functionResponse parts
- SSE streaming with JSON array buffer parsing
- 8 unit tests for conversion and parsing logic
- Register provider in g3-core and validate in g3-cli
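The conversion in miniature, using serde_json (messages reduced to
(role, text) pairs; the real converter also handles tool parts):

    use serde_json::{json, Value};

    fn to_gemini(messages: &[(&str, &str)]) -> Value {
        let mut system = String::new();
        let mut contents = Vec::new();
        for (role, text) in messages {
            match *role {
                // Later system messages overwrite earlier ones
                // ("last wins").
                "system" => system = text.to_string(),
                other => contents.push(json!({
                    "role": if other == "assistant" { "model" } else { "user" },
                    "parts": [{ "text": text }]
                })),
            }
        }
        let mut body = json!({ "contents": contents });
        if !system.is_empty() {
            body["system_instruction"] = json!({ "parts": [{ "text": system }] });
        }
        body
    }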
The resolve_max_tokens() function was returning 2048 for embedded providers,
which caused responses to be truncated prematurely. Increased to 8192 to
allow the provider's own effective_max_tokens() calculation to work properly.
- Add context_window_size() method to LLMProvider trait
- Implement for EmbeddedProvider to return the auto-detected context length
- Update Agent to query provider directly instead of using hardcoded defaults
- Removes need for model-specific context length mappings
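Trait shape, roughly (surrounding methods elided):

    pub trait LLMProvider {
        /// Context window in tokens. Embedded providers return the
        /// value auto-detected from the loaded model; this replaces
        /// the hardcoded per-model mappings in Agent.
        fn context_window_size(&self) -> usize;
    }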
- Use global OnceLock for llama.cpp backend to prevent BackendAlreadyInitialized error
- Suppress verbose llama.cpp stderr logging during model loading
- Fix provider validation to accept "embedded.name" format (extract type before dot)
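The single-init pattern, sketched (LlamaBackend is a stand-in for the
real backend handle):

    use std::sync::OnceLock;

    struct LlamaBackend;
    impl LlamaBackend {
        fn init() -> Self { LlamaBackend }
    }

    static BACKEND: OnceLock<LlamaBackend> = OnceLock::new();

    /// Initializes at most once, however many providers are built;
    /// later callers share the existing backend instead of hitting
    /// BackendAlreadyInitialized.
    fn backend() -> &'static LlamaBackend {
        BACKEND.get_or_init(LlamaBackend::init)
    }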
Eliminate code-path aliasing in Agent construction methods by introducing
a single `build_agent()` helper that all constructors delegate to.
Before: 3 nearly-identical `Ok(Self { ... })` blocks (~30 lines each)
with subtle differences in auto_compact, is_autonomous, quiet, and
computer_controller fields - prone to drift over time.
After: Single canonical `build_agent()` method that constructs Agent
with all fields. All public constructors delegate to this single path:
- new_for_test() -> new_for_test_with_readme() -> build_agent()
- new_with_mode_and_readme() -> build_agent()
Changes:
- Add `build_agent()` private helper method (single source of truth)
- Simplify `new_for_test()` to delegate to `new_for_test_with_readme()`
- Update `new_for_test_with_readme()` to use `build_agent()`
- Update `new_with_mode_and_readme()` to use `build_agent()`
Net reduction: ~43 lines (-109/+66)
All 190 tests pass.
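Delegation shape (types and fields reduced to stand-ins):

    struct Provider;
    enum Mode { Test, Interactive }
    type Result<T> = std::result::Result<T, String>;

    // The real Agent carries many more fields (auto_compact,
    // is_autonomous, quiet, computer_controller, ...).
    struct Agent {
        provider: Provider,
        mode: Mode,
        project_context: Option<String>,
    }

    impl Agent {
        /// Single source of truth: every field is set exactly once,
        /// here.
        fn build_agent(provider: Provider, mode: Mode, ctx: Option<String>) -> Result<Self> {
            Ok(Self { provider, mode, project_context: ctx })
        }

        fn new_for_test(provider: Provider) -> Result<Self> {
            Self::new_for_test_with_readme(provider, None)
        }

        fn new_for_test_with_readme(provider: Provider, ctx: Option<String>) -> Result<Self> {
            Self::build_agent(provider, Mode::Test, ctx)
        }

        fn new_with_mode_and_readme(provider: Provider, mode: Mode, ctx: Option<String>) -> Result<Self> {
            Self::build_agent(provider, mode, ctx)
        }
    }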
Agent: fowler
- Extend Usage struct with cache_creation_tokens and cache_read_tokens fields
- Parse Anthropic cache_creation_input_tokens and cache_read_input_tokens
- Parse OpenAI prompt_tokens_details.cached_tokens for automatic prefix caching
- Add CacheStats struct to Agent for cumulative tracking across API calls
- Add "Prompt Cache Statistics" section to /stats output showing:
- API call count and cache hit count
- Hit rate percentage
- Total input tokens and cache read/creation tokens
- Cache efficiency (% of input served from cache)
- Update all provider implementations and test files
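The cumulative tracking and derived percentages, as a sketch (field
names follow this commit; the formulas are the straightforward ones):

    /// Cumulative prompt-cache statistics across API calls.
    #[derive(Default)]
    struct CacheStats {
        api_calls: u64,
        cache_hits: u64, // calls with cache_read_tokens > 0
        input_tokens: u64,
        cache_read_tokens: u64,
        cache_creation_tokens: u64,
    }

    impl CacheStats {
        fn record(&mut self, input: u64, read: u64, creation: u64) {
            self.api_calls += 1;
            if read > 0 { self.cache_hits += 1; }
            self.input_tokens += input;
            self.cache_read_tokens += read;
            self.cache_creation_tokens += creation;
        }

        /// Hit rate: % of API calls that read from the cache.
        fn hit_rate(&self) -> f64 {
            if self.api_calls == 0 { return 0.0; }
            100.0 * self.cache_hits as f64 / self.api_calls as f64
        }

        /// Efficiency: % of input tokens served from the cache.
        fn efficiency(&self) -> f64 {
            if self.input_tokens == 0 { return 0.0; }
            100.0 * self.cache_read_tokens as f64 / self.input_tokens as f64
        }
    }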