Commit Graph

246 Commits

Dhanji R. Prasanna
02799a8e69 refactor(g3-core): extract streaming helpers and simplify cache control logic
Readability improvements to g3-core/src/lib.rs:

- Extract format_tool_arg_value() to streaming.rs for tool argument display
- Extract format_read_file_summary() to streaming.rs for file read summaries
- Add format_tool_output_summary() helper for consistent output formatting
- Add get_provider_cache_control() helper to eliminate duplicated cache lookup
- Simplify cache control logic in execute_single_task and stream_completion_with_tools
- Add unit tests for all new streaming helpers

Results:
- lib.rs: 2979 → 2945 lines (34 lines saved)
- streaming.rs: 305 → 379 lines (74 lines added as reusable, tested helpers)
- All 155+ tests pass

Agent: carmack
2026-01-12 07:21:40 +05:30
Dhanji R. Prasanna
f10374c925 Remove machine mode entirely from g3
- Delete machine_ui_writer.rs
- Remove --machine CLI flag from cli_args.rs
- Remove run_machine_mode(), run_interactive_machine(), run_autonomous_machine() functions
- Remove handle_machine_command() function
- Simplify OutputMode enum to just use SimpleOutput directly
- Simplify SimpleOutput struct (remove machine_mode field)
- Remove machine_mode parameter from setup_workspace_directory()
- Remove test_machine_option_accepted test
- Disable ACD by default in agent_mode (requires --acd flag)
- Change 'memory checkpoint' message formatting
- Remove dehydration status message
2026-01-12 06:01:31 +05:30
Dhanji R. Prasanna
f415dbb84b Fix ACD turn summary loss and add /dump command
ACD (Aggressive Context Dehydration) fixes:
- Fixed dehydrate_context() to extract turn summary from context window
  instead of using the passed-in final_response (which contained only
  the timing footer, not the actual LLM response)
- Removed final_response parameter from dehydrate_context() since it
  now self-extracts the last assistant message as the summary
- This ensures the actual turn summary is preserved after dehydration,
  not just the timing footer

New /dump command:
- Added /dump command to dump entire context window to tmp/ for debugging
- Shows message index, role, kind, content length, and full content
- Available in both console and machine modes

UTF-8 safety:
- Fixed truncate_to_word_boundary() to use character indices instead of
  byte indices, preventing panics on multi-byte UTF-8 characters
- Added UTF-8 string slicing guidance to AGENTS.md
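A minimal sketch of the character-index fix described above, assuming a signature like the one below (the real `truncate_to_word_boundary` may differ): indexing by characters via `char_indices()` can never split a multi-byte UTF-8 sequence, so the slice is always valid.

```rust
/// Truncate to at most `max_chars` characters (not bytes), then back up to a
/// word boundary. Counting characters instead of bytes means slicing cannot
/// panic on multi-byte UTF-8 such as emoji.
fn truncate_to_word_boundary(s: &str, max_chars: usize) -> &str {
    // Byte offset of the character just past the budget; None means s fits.
    let cut = match s.char_indices().nth(max_chars) {
        Some((i, _)) => i,
        None => return s,
    };
    let head = &s[..cut];
    // If the cut lands on whitespace, the last word is already complete.
    if s[cut..].starts_with(char::is_whitespace) {
        return head;
    }
    // Otherwise drop the partial word.
    match head.rfind(char::is_whitespace) {
        Some(i) => head[..i].trim_end(),
        None => head,
    }
}
```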

Agent: g3
2026-01-12 05:13:02 +05:30
Dhanji R. Prasanna
bb25c7881a Change agent mode header text
From: 🤖 Running as agent: fowler
To: >> agent mode | fowler
2026-01-11 17:24:26 +05:30
Dhanji R. Prasanna
4962f439f3 Simplify agent mode working directory display
Change from: 📁 Working directory: "/Users/dhanji/src/g3"
To: -> ~/src/g3

Replaces home directory with ~ for cleaner output.
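The substitution can be sketched as below. This is illustrative, not g3's actual code: the home directory is passed in as a parameter for testability, where g3 would look it up from the environment.

```rust
use std::path::Path;

/// Render a working directory with the home prefix abbreviated to "~".
fn abbreviate_home(path: &Path, home: &Path) -> String {
    match path.strip_prefix(home) {
        // The path IS the home directory.
        Ok(rest) if rest.as_os_str().is_empty() => "~".to_string(),
        // The path is under home: replace the prefix.
        Ok(rest) => format!("~/{}", rest.display()),
        // Not under home: display unchanged.
        Err(_) => path.display().to_string(),
    }
}
```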
2026-01-11 17:20:26 +05:30
Dhanji R. Prasanna
f83ae7fd39 Add status line showing loaded context in agent mode
Shows checkmarks for README, AGENTS.md, and Memory if loaded,
or dots if not found. Displayed below the working directory line.
2026-01-11 17:13:32 +05:30
Dhanji R. Prasanna
9509e51708 style: simplify auto-memory checkpoint message 2026-01-11 16:51:09 +05:30
Dhanji R. Prasanna
280ae1fcbb feat: add --auto-memory flag to prompt LLM to save discoveries
Adds a new --auto-memory CLI flag that automatically sends a reminder
to the LLM after each turn where tools were called, prompting it to
call the remember tool if it discovered any key code locations.

Changes:
- Add auto_memory field and set_auto_memory() method to Agent
- Add tool_calls_this_turn tracking in execute_tool_in_dir()
- Add send_auto_memory_reminder() that sends reminder after tool use
- Add --auto-memory CLI flag and wire it up in console/machine modes
- Call send_auto_memory_reminder() in single-shot and interactive modes
- Add visible status messages for auto-memory actions

Fixes bug where tool calls were not being tracked when execute_tool_in_dir
was called directly with working_dir=None.
2026-01-11 08:00:51 +08:00
Dhanji R. Prasanna
33c1aba86e Show human-readable descriptions in /resume session list
- Add description field to SessionContinuation struct
- Extract first user message (truncated to ~60 chars at word boundary)
- Display as quoted text instead of session ID hash
- Fall back to session ID if no description available

Example: [2 hours ago] 'when I call /resume it only shows me 2 sessions...'
2026-01-11 06:22:20 +08:00
Dhanji R. Prasanna
8926775acb Add session continuation symlink fix and /resume command
Fix session detection:
- Add save_session_continuation() calls at all session exit points
- Sessions now properly create .g3/session symlink for resume detection
- Fixes issue where g3 wasn't offering to resume previous sessions

Add /resume command:
- New list_sessions_for_directory() to scan available sessions
- New switch_to_session() method to safely switch between sessions
- Shows numbered list with timestamps, context %, and TODO status
- Saves current session before switching (can be resumed later)
- Restores full context if <80% used, otherwise uses summary
- Machine mode supports /resume and /resume <number>

Documentation:
- Add /clear and /resume to CONTROL_COMMANDS.md
- Update /help output with new commands
2026-01-11 05:30:58 +08:00
Dhanji R. Prasanna
6be0a03c4c Fix timing footer being saved to context window
The timing footer (e.g., ⏱️ 19.4s | 💭 4.7s) was being saved to the
conversation history as a separate assistant message. This happened
because stream_completion_with_tools returns the timing footer in
TaskResult.response for display, but the caller was also saving it
to context.

Fix: Strip the timing footer (identified by \n\n⏱️) before saving
to context window. The timing footer remains display-only.
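The strip step can be sketched as follows. The `\n\n⏱️` marker comes from the commit message; the function name and standalone shape are illustrative.

```rust
/// Drop the display-only timing footer before saving the assistant
/// message to the context window. Everything from the last "\n\n⏱️"
/// onward is display-only.
fn strip_timing_footer(response: &str) -> &str {
    match response.rfind("\n\n⏱️") {
        Some(idx) => &response[..idx],
        None => response,
    }
}
```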

Also includes:
- Research tool blank line fix: only add visual separator for research
  tool output, not all tools
- Research tool webdriver propagation: pass parent's webdriver browser
  choice (Safari vs Chrome headless) to scout subprocess
2026-01-10 15:55:59 +11:00
Dhanji R. Prasanna
33e5705fc3 Add research tool for web-based research via scout agent
New tool that spawns a scout agent to perform web research and return
a structured research brief. The scout agent uses webdriver to browse
the web and returns a decision-ready report.

Changes:
- Added 'research' tool definition (12 core tools total)
- Added research tool dispatch in tool_dispatch.rs
- Created tools/research.rs implementation:
  - Spawns 'g3 --agent scout <query>' as subprocess
  - Captures stdout and extracts last line (report file path)
  - Reads and returns the report file contents
- Added exclude_research flag to ToolConfig
- Scout agent (agent_name == 'scout') does NOT have access to research
  tool to prevent infinite recursion
- Updated system prompts to describe when to use research tool
- Added scout.md agent prompt with research brief output contract

The research tool is preferred for complex research tasks (APIs, SDKs,
libraries, approaches, bugs). WebDriver can still be used directly for
simple lookups or fine-grained control.
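The subprocess flow above can be sketched as below, assuming simplified error handling (the real implementation lives in tools/research.rs and is more thorough):

```rust
use std::process::Command;

/// Spawn the scout agent, treat the last non-empty stdout line as the
/// report file path, and return the report contents.
fn run_research(query: &str) -> std::io::Result<String> {
    let out = Command::new("g3").args(["--agent", "scout", query]).output()?;
    let stdout = String::from_utf8_lossy(&out.stdout);
    let path = last_nonempty_line(&stdout).ok_or_else(|| {
        std::io::Error::new(std::io::ErrorKind::Other, "scout produced no output")
    })?;
    std::fs::read_to_string(path)
}

/// Extract the last non-empty line of the subprocess output.
fn last_nonempty_line(s: &str) -> Option<&str> {
    s.lines().rev().map(str::trim).find(|l| !l.is_empty())
}
```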
2026-01-09 15:59:19 +11:00
Dhanji R. Prasanna
e301075666 Fix panic on multi-byte chars in filter_json buffer truncation
The buffer truncation code was slicing at a raw byte offset which could
land in the middle of a multi-byte character (like emojis), causing a
panic. Fixed by using char_indices() to find valid character boundaries.

Also added stop_reason field to CompletionChunk initializers in tests
to complete the stop_reason feature addition.

- Fix byte boundary panic in filter_json.rs line 327
- Add test for multi-byte character handling
- Update test files with missing stop_reason field
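The boundary fix can be sketched as a free function, assuming a byte budget (the original operated on filter_json's internal buffer): walk `char_indices()` and keep the largest boundary that fits.

```rust
/// Cap a buffer at `max_bytes` without slicing mid-character, which would
/// panic on multi-byte UTF-8 such as emoji.
fn truncate_at_char_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = 0;
    for (i, c) in s.char_indices() {
        let next = i + c.len_utf8();
        if next > max_bytes {
            break;
        }
        end = next; // largest valid boundary within the budget so far
    }
    &s[..end]
}
```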
2026-01-09 15:20:57 +11:00
Dhanji R. Prasanna
c470964628 Fix: Save LLM text response to context after tool execution
When the LLM executes a tool and then outputs text (e.g., analysis after
reading images), the text was being displayed during streaming but never
saved to the context window. This caused:

1. The response to appear truncated in the session log
2. Loss of context for subsequent turns
3. The LLM losing track of what it had already said

The fix saves current_response to the context window before breaking
out of the streaming loop for auto-continue after tool execution.

Reproduction scenario:
- User asks LLM to read images and analyze them
- LLM calls read_image tool
- Tool executes successfully
- LLM outputs analysis text ("Now I can see the results...")
- Text was displayed but lost from session log

Now the text is properly persisted to the context window.
2026-01-09 15:04:43 +11:00
Dhanji R. Prasanna
777191b3cb Remove final_output tool - let summaries stream naturally
- Remove final_output from tool definitions, dispatch, and misc tools
- Update system prompts to request summaries as regular markdown text
- Remove print_final_output from UiWriter trait and all implementations
- Remove final_output handling from agent core logic
- Rename final_output_summary → summary in session continuation
- Delete final_output test files
- Update tool count tests (12→11, 27→26)

This allows LLM summaries to stream through the markdown formatter
for a more natural, responsive user experience instead of buffering
everything into a tool call.
2026-01-09 14:57:24 +11:00
Dhanji R. Prasanna
67be0f20c7 fix: remove allow_multiple_tool_calls config and simplify tool execution flow
This fixes a bug where the agent would stop responding abruptly without
calling final_output. The root cause was the allow_multiple_tool_calls
config option (default: false) which caused the agent to break out of
the streaming loop mid-stream after executing the first tool, losing
any subsequent content.

Changes:
- Remove allow_multiple_tool_calls config option entirely
- Always process all tool calls without breaking mid-stream
- Simplify system prompt generation (no longer needs boolean param)
- Let the stream complete fully before continuing to next iteration
- Change find_last_tool_call_start to find_first_tool_call_start
- Remove parser.reset() call on duplicate detection

Benefits:
- Simpler logic with less conditional branching
- No lost content after tool calls
- Consistent behavior for all users
- Reduced config complexity
2026-01-09 13:28:07 +11:00
Dhanji R. Prasanna
347513b04c Add comprehensive stress tests for streaming markdown formatter
Add 10 stress tests covering:
- Nested formatting (bold in italic, italic in bold)
- Empty/minimal content edge cases
- Escape sequences and special characters
- Lists with complex inline formatting
- Links with various content types
- Tables with formatting in cells
- Code blocks (should not format contents)
- Mixed block elements (headers, quotes, rules)
- Nested lists (3+ levels, mixed types)
- Pathological/adversarial inputs (unbalanced delimiters, unicode, long lines)

All 45 tests pass.
2026-01-08 20:27:28 +11:00
Dhanji R. Prasanna
381b852869 refactor(g3-core): Extract streaming utilities into dedicated module
Extract reusable utilities from the massive stream_completion_with_tools
function into a new streaming.rs module for improved readability:

- format_duration, format_timing_footer: timing display helpers
- clean_llm_tokens: consolidates 4 duplicate token-cleaning call sites
- log_stream_error: extracts 70+ lines of error logging
- is_empty_response, is_connection_error: predicate helpers
- truncate_for_display, truncate_line: string truncation utilities
- StreamingState, IterationState: state structs for future refactoring

Results:
- lib.rs reduced from 2978 to 2840 lines (138 lines, ~5%)
- New streaming.rs: 309 lines with 5 unit tests
- All 98+ tests pass

Agent: carmack
2026-01-08 13:20:11 +11:00
Dhanji R. Prasanna
5bfaee8dd5 use consistent naming for compaction 2026-01-08 12:54:03 +11:00
Dhanji R. Prasanna
48036d01e3 fix(g3-core): disable auto-continue in interactive mode
Auto-continue was incorrectly triggering when the LLM asked questions
in interactive/chat mode. Now auto-continue only activates when
is_autonomous is true, allowing proper back-and-forth conversation
in interactive mode.

Agent: fowler
2026-01-07 10:37:30 +11:00
Dhanji R. Prasanna
b73dfacb7a refactor(g3-core): extract provider_registration and session modules
Extract two focused modules from the monolithic lib.rs (3372 lines):

1. provider_registration.rs (233 lines)
   - Consolidates duplicated provider registration patterns
   - Single determine_providers_to_register() function for mode-based selection
   - Unified register_providers() async function for all provider types
   - Includes unit tests for registration logic

2. session.rs (394 lines)
   - Session ID generation (generate_session_id)
   - Context window persistence (save_context_window, write_context_window_summary)
   - Error logging (log_error_to_session)
   - Utility functions (format_token_count, token_indicator)
   - Session restoration helper (restore_from_session_log)
   - Includes comprehensive unit tests

Also fixes:
- Removed redundant tool_executed assignment that triggered unused warning
- Removed unused Message import in session.rs

Results:
- lib.rs reduced from 3372 to 2976 lines (-396 lines, -11.7%)
- All tests pass, no warnings
- Behavior preserved (pure mechanical extraction)

Agent: fowler
2026-01-07 10:20:28 +11:00
Dhanji R. Prasanna
e2445a5d22 refactor(g3-core): extract duplicate detection helper and consolidate thinning
- Extract check_duplicate_in_previous_message() helper to reduce nesting
  from 6+ levels to 2 levels in stream_completion_with_tools
- Create do_thin_context() and do_thin_context_all() helpers to centralize
  context thinning with event tracking
- Use provider_config::parse_provider_ref() in additional call sites
- All 295 tests pass

This continues the refactoring to eliminate code-path aliasing and
reduce cyclomatic complexity in the Agent implementation.
2026-01-07 08:45:51 +11:00
Dhanji R. Prasanna
386176899e Remove vision tools (except take_screenshot) and macax tools
Vision tools removed:
- extract_text (OCR from image files)
- extract_text_with_boxes (OCR with bounding boxes)
- vision_find_text (find text in app windows)
- vision_click_text (find and click on text)
- vision_click_near_text (click near text labels)

macax tools removed:
- macax_list_apps
- macax_get_frontmost_app
- macax_activate_app
- macax_press_key
- macax_type_text

The LLM can now read images directly via read_image tool.
take_screenshot is retained for capturing application windows.

Files deleted:
- crates/g3-core/src/tools/vision.rs
- crates/g3-core/src/tools/macax.rs
- docs/macax-tools.md

Updated tool counts: 12 core + 15 webdriver = 27 total
2026-01-03 17:38:25 +11:00
Dhanji R. Prasanna
f4a1bf5e93 fix agent-mode session resumption bug 2026-01-03 16:44:58 +11:00
Dhanji R. Prasanna
76bfb77f84 further fowler fixes and session fixes 2026-01-03 15:47:04 +11:00
Dhanji R. Prasanna
65867e7f96 refactor tools out of lib.rs 2026-01-03 15:06:34 +11:00
Dhanji R. Prasanna
595ad6ad21 agent mode resumption 2026-01-03 14:50:08 +11:00
Dhanji R. Prasanna
016efc1db6 Prevent agent mode from stopping after first TODO phase
- Add TODO completion check to final_output tool in autonomous mode only
- When incomplete TODO items exist, reject final_output and prompt LLM to continue
- Non-autonomous modes (interactive, chat) are unaffected
- Add 6 tests verifying behavior in both autonomous and non-autonomous modes

Fixes issue where LLM would call final_output after completing first phase,
causing agent to stop prematurely instead of continuing with remaining phases.
2025-12-27 12:35:31 +11:00
Dhanji R. Prasanna
4c25e43ee4 refactoring 2025-12-26 15:16:12 +11:00
Dhanji R. Prasanna
666be4ff40 Fix duplicate tool call handling: move tool_executed flag and reset parser
- Move tool_executed = true after duplicate check to prevent auto-continue
  from triggering when only duplicate tools were detected
- Reset parser state when duplicate detected to clear any partial/polluted
  state from LLM stuttering or example tool calls in markdown blocks
2025-12-26 11:55:57 +11:00
Dhanji R. Prasanna
46611d9e13 Improve read_image output formatting
- Add newline after └─ before first image preview
- Show only filename (not full path) in info line
2025-12-26 11:36:10 +11:00
Dhanji R. Prasanna
2a4dad2842 Update read_image output with box drawing characters
- Print └─ before images to break out of tool output box
- Print ┌─ after images to resume tool output box
- Remove │ prefix from image preview and info lines
- Info line uses single space prefix, dimmed text
- Only include error messages in tool result (success info printed via imgcat)
2025-12-26 11:29:33 +11:00
Dhanji R. Prasanna
e688d3b29f Simplify read_image imgcat output formatting
- Remove │ prefix before image preview, use single space instead
- Keep info line on its own line with │ prefix
- Keep blank line spacing between images
2025-12-26 11:24:13 +11:00
Dhanji R. Prasanna
3601cc0547 Enhance read_image tool with magic byte detection and multi-image support
- Fix media type detection using magic bytes instead of file extension
  - Correctly identifies JPEG files with .png extension (and vice versa)
  - Supports PNG, JPEG, GIF, and WebP formats

- Add multi-image support with file_paths array parameter
  - Load multiple images in a single tool call
  - All images queued for LLM analysis

- Enhanced CLI output:
  - Inline image preview via iTerm2 imgcat protocol (height=5)
  - Dimmed info line showing: path | dimensions | media type | file size
  - Proper │ prefix alignment with tool output boxing
  - Human-readable file sizes (bytes, KB, MB)

- Add image dimension extraction from file headers
  - PNG, JPEG, GIF, WebP dimension parsing

- Add comprehensive tests for magic byte detection and dimensions
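Magic-byte detection for the four supported formats can be sketched with slice patterns; the signatures below are the standard ones, and the MIME-style return values are illustrative.

```rust
/// Detect image media type from leading magic bytes rather than trusting
/// the file extension.
fn detect_media_type(bytes: &[u8]) -> Option<&'static str> {
    match bytes {
        // PNG: 8-byte signature.
        [0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A, ..] => Some("image/png"),
        // JPEG: SOI marker plus 0xFF.
        [0xFF, 0xD8, 0xFF, ..] => Some("image/jpeg"),
        // GIF: "GIF87a" or "GIF89a".
        [b'G', b'I', b'F', b'8', ..] => Some("image/gif"),
        // WebP: RIFF container with "WEBP" at offset 8.
        [b'R', b'I', b'F', b'F', _, _, _, _, b'W', b'E', b'B', b'P', ..] => Some("image/webp"),
        _ => None,
    }
}
```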
2025-12-26 11:19:37 +11:00
Dhanji R. Prasanna
3ece02ff31 fix: resolve compiler warnings across crates
- Remove unused assignment to final_output_called (returns immediately after)
- Mark cache_config field as #[allow(dead_code)] (reserved for future use)
- Mark print_status_line method as #[allow(dead_code)] (reserved for future use)
2025-12-25 18:47:22 +11:00
Dhanji R. Prasanna
258f9878ff style: use ◉ symbol for token count in timing footer
Changes '227tk | 48% ctx' to '227 ◉ | 48%' for a cleaner look.
2025-12-25 18:40:17 +11:00
Dhanji R. Prasanna
d09c80180e fix: remove redundant TODO list header that breaks boxing effect 2025-12-25 18:34:51 +11:00
Dhanji R. Prasanna
64f27c0abc feat: move TODO lists to session-scoped directories
TODO lists are now stored in .g3/sessions/<session_id>/todo.g3.md instead
of the workspace root. This prevents different g3 sessions from accidentally
picking up or overwriting each other's TODOs.

Changes:
- Add get_session_todo_path() function in paths.rs
- Update todo_read/todo_write handlers to use session-specific paths
- Remove TODO loading at Agent initialization (sessions start fresh)
- Update prompts to reflect session-scoped behavior

Fallback behavior preserved for planner mode (G3_TODO_PATH env var).
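The session-scoped path layout can be sketched as below; the function name matches the commit, but the exact signature is assumed.

```rust
use std::path::{Path, PathBuf};

/// Session-scoped TODO location: .g3/sessions/<session_id>/todo.g3.md
/// under the workspace root.
fn get_session_todo_path(workspace_root: &Path, session_id: &str) -> PathBuf {
    workspace_root
        .join(".g3")
        .join("sessions")
        .join(session_id)
        .join("todo.g3.md")
}
```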
2025-12-25 18:33:03 +11:00
Dhanji R. Prasanna
d9c58576a1 feat: add background_process tool for launching long-running processes
Adds a new tool that allows launching processes (like game servers) in the
background while g3 continues to operate. The process runs independently
with stdout/stderr captured to a log file.

Features:
- Named process tracking for easy reference
- Automatic log capture to logs/background_processes/
- Returns PID and log file path for use with shell tool
- Automatic cleanup on agent shutdown via Drop trait

Usage: Use shell tool to interact with the process:
- Read logs: tail -100 <logfile>
- Check status: ps -p <pid>
- Stop process: kill <pid>

Files:
- New: crates/g3-core/src/background_process.rs
- New: crates/g3-core/tests/background_process_demo_test.rs
- Modified: crates/g3-core/src/lib.rs (tool definition + handler)
- Modified: crates/g3-core/src/prompts.rs (documentation)
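The core of the launch step can be sketched as follows, assuming a simplified signature (the real background_process.rs also names and tracks the child for Drop-based cleanup):

```rust
use std::fs::File;
use std::path::Path;
use std::process::{Child, Command, Stdio};

/// Launch a process in the background with stdout and stderr captured
/// to a log file. The returned Child carries the PID; the caller keeps
/// it for later status checks or cleanup.
fn spawn_logged(program: &str, args: &[&str], log_path: &Path) -> std::io::Result<Child> {
    let log = File::create(log_path)?;
    let err = log.try_clone()?; // share one log file between stdout and stderr
    Command::new(program)
        .args(args)
        .stdout(Stdio::from(log))
        .stderr(Stdio::from(err))
        .spawn()
}
```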
2025-12-25 18:23:10 +11:00
Dhanji R. Prasanna
9ff5ba6098 Fix auto-continue false positives from tool-call-like content
When the LLM outputs text containing tool call patterns (e.g., reading
log files, showing examples, or discussing tool calls), the parser's
has_unexecuted_tool_call() would detect these as real tool calls and
trigger auto-continue, leading to repeated empty responses.

The fix: mark the parser buffer as consumed when content is displayed.
This prevents tool-call-like patterns in displayed text from triggering
false positives later. The fix is safe because:

1. Only runs when no tool was detected (inside 'if !tool_executed')
2. Legitimate tool calls are detected first by process_chunk()
3. Matches existing pattern of calling mark_tool_calls_consumed()
   after tool execution
2025-12-25 17:55:13 +11:00
Dhanji R. Prasanna
f9d0c33461 Revert "Fix auto-continue bug: ensure assistant message before continue prompt"
This reverts commit fe96969adb.
2025-12-24 15:52:23 +11:00
Dhanji R. Prasanna
fe96969adb Fix auto-continue bug: ensure assistant message before continue prompt
The auto-continue logic was adding User continue prompts without first
adding an Assistant message when the LLM returned an empty response.
This caused consecutive User messages in the conversation history,
which confused the LLM and caused it to return more empty responses.

The fix ensures an Assistant message is always added before the continue
prompt, using '[empty response]' as a placeholder when the LLM returned
nothing substantive. This maintains proper User/Assistant alternation.
2025-12-24 15:50:30 +11:00
Dhanji R. Prasanna
cd64ebbf87 Add tokens consumed and context percentage to per-tool timing footer
The per-tool timing line now shows:
- Tokens delta (tokens added to context by this tool call)
- Context window usage percentage

Example: └─ ⏱️ 1ms  523tk | 49% ctx

Changes:
- Updated UiWriter trait print_tool_timing signature
- Track tokens before/after adding tool messages to calculate delta
- Updated ConsoleUiWriter, MachineUiWriter, PlannerUiWriter, and test mocks
2025-12-24 15:44:19 +11:00
Dhanji R. Prasanna
fd22ce9890 refactor(g3-core): extract 4 modules from monolithic lib.rs
Reduce lib.rs from 7481 to 6557 lines (-12.4%) by extracting:

- paths.rs: Session/workspace path utilities (get_todo_path, get_logs_dir, etc.)
- streaming_parser.rs: StreamingToolParser for LLM response parsing
- utils.rs: Diff parsing and shell escaping utilities
- webdriver_session.rs: Unified Safari/Chrome WebDriver abstraction

All public APIs preserved via re-exports for backward compatibility.
Added 13 new unit tests across extracted modules.
All 225 tests pass.
2025-12-24 14:32:39 +11:00
Dhanji R. Prasanna
382b905441 duplicate output fix 2025-12-23 17:20:23 +11:00
Dhanji R. Prasanna
743d622468 Add token usage and context % to timing footer
Added a quality-of-life feature that displays:
- Tokens used in the current turn (from LLM response, not estimated)
- Current context window usage percentage

These are displayed dimmed after the timing info:
  ⏱️ 1.2s | 💭 0.3s  1234tk | 45% ctx

The token count comes directly from the LLM's usage response data,
not from any estimation. If no usage data is available from the LLM,
only the context percentage is shown.
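The suffix logic can be sketched as below; names and exact formatting are illustrative, not g3's actual API.

```rust
/// Build the token/context suffix for the timing footer. If the provider
/// reported no usage data, show only the context percentage.
fn context_suffix(turn_tokens: Option<u32>, ctx_pct: u8) -> String {
    match turn_tokens {
        Some(t) => format!("{}tk | {}% ctx", t, ctx_pct),
        None => format!("{}% ctx", ctx_pct),
    }
}
```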
2025-12-22 17:22:54 +11:00
Dhanji R. Prasanna
10e2fe9b94 Add tests for duplicate detection logic
Added 13 tests to verify that duplicate detection only catches
IMMEDIATELY SEQUENTIAL duplicates:

- test_find_complete_json_object_end_* - Tests for JSON parsing helper
- test_same_tool_with_text_between_not_duplicate - Key test ensuring
  tool calls separated by text are NOT duplicates
- test_different_tools_back_to_back_not_duplicate
- test_same_tool_different_args_not_duplicate
- test_identical_tool_calls_back_to_back_are_duplicates
- test_has_text_after_tool_call - Tests text detection logic
- test_tool_call_with_newlines_between
- test_tool_call_with_whitespace_text_between
- test_tool_call_in_middle_of_text
- test_multiple_different_tool_calls_with_text

Also made find_complete_json_object_end public for testing.
2025-12-22 17:11:05 +11:00
Dhanji R. Prasanna
c7204c6699 Fix tool call detection and duplicate handling issues
1. Set tool_executed=true when a tool call is detected, even if skipped
   as a duplicate. This prevents the raw JSON from being printed to screen
   when a tool call is detected but not executed.

2. Remove session-level duplicate detection entirely. All tools should be
   allowed to be called multiple times in a session.

3. Fix sequential duplicate detection to only catch IMMEDIATELY sequential
   duplicates:

   - DUP IN CHUNK: Now only checks if the PREVIOUS tool call in the chunk
     is the same (not any tool call in the chunk)

   - DUP IN MSG: Now only checks if the LAST tool call in the previous
     message matches AND there's no text after it. If there's any
     non-whitespace text between tool calls, they're not considered
     duplicates.

This allows legitimate re-use of tools while still catching cases where
the LLM stutters and outputs the same tool call twice in a row.
2025-12-22 17:03:07 +11:00
Dhanji R. Prasanna
da91459e09 Fix auto-continue bug: don't return early when tools executed but final_output not called
The bug was in the chunk.finished block inside stream_completion_with_tools.
When no tool was executed in the CURRENT iteration (!tool_executed), the code
would return early without checking if tools were executed in PREVIOUS iterations
(any_tool_executed) and final_output was never called.

This caused the agent to terminate prematurely after executing tools like
todo_read when the LLM responded with text instead of calling final_output.

The fix adds a check: if any_tool_executed && !final_output_called, we break
to let the outer loop's auto-continue logic prompt the LLM to continue.

Also fixed missing debug! import in g3-console/src/main.rs.
2025-12-22 16:45:17 +11:00
Dhanji R. Prasanna
923def0ab2 Convert all INFO logs to DEBUG to reduce CLI noise
Converted ~77 info! macro calls to debug! across the codebase to prevent
log messages from interrupting the CLI experience during normal operation.
Users can still see these logs by setting RUST_LOG=debug if needed.

Affected crates:
- g3-cli
- g3-computer-control
- g3-console
- g3-core
- g3-ensembles
- g3-execution
- g3-providers
2025-12-22 16:27:35 +11:00