Fix: Save LLM text response to context after tool execution

When the LLM executes a tool and then outputs text (e.g., analysis after
reading images), the text was being displayed during streaming but never
saved to the context window. This caused:

1. The response to appear truncated in the session log
2. Loss of context for subsequent turns
3. The LLM losing track of what it had already said

The fix saves current_response to the context window as an assistant
message before breaking out of the streaming loop to auto-continue
after tool execution.

Reproduction scenario:
- User asks LLM to read images and analyze them
- LLM calls read_image tool
- Tool executes successfully
- LLM outputs analysis text ("Now I can see the results...")
- Text was displayed but lost from session log

Now the text is properly persisted to the context window.
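
The guard added by this fix can be sketched in isolation as below.
Message, MessageRole, and ContextWindow here are simplified stand-ins
modeled on the identifiers in the diff, not the real types from this
codebase:

```rust
// Simplified stand-in types (assumptions, not the actual crate types).
#[derive(Debug, Clone)]
enum MessageRole {
    Assistant,
}

#[derive(Debug, Clone)]
struct Message {
    role: MessageRole,
    content: String,
}

impl Message {
    fn new(role: MessageRole, content: String) -> Self {
        Message { role, content }
    }
}

#[derive(Default)]
struct ContextWindow {
    messages: Vec<Message>,
}

impl ContextWindow {
    fn add_message(&mut self, msg: Message) {
        self.messages.push(msg);
    }
}

// Persist any streamed text to the context window before the loop
// breaks to auto-continue, mirroring the guard added in the diff.
fn save_response_before_continue(context_window: &mut ContextWindow, current_response: &str) {
    if !current_response.trim().is_empty() {
        context_window.add_message(Message::new(
            MessageRole::Assistant,
            current_response.to_string(),
        ));
    }
}

fn main() {
    let mut cw = ContextWindow::default();
    // Text the LLM streamed after the tool call; before the fix this
    // was displayed to the user but never persisted.
    save_response_before_continue(&mut cw, "Now I can see the results...");
    // Whitespace-only text is still skipped, as in the original guard.
    save_response_before_continue(&mut cw, "   ");
    assert_eq!(cw.messages.len(), 1);
    println!("persisted: {}", cw.messages[0].content);
}
```

The trim-and-check guard keeps whitespace-only chunks out of the
session log while ensuring real analysis text survives the break.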
Author: Dhanji R. Prasanna
Date:   2026-01-09 15:04:43 +11:00
parent 777191b3cb
commit c470964628

@@ -2339,6 +2339,17 @@ impl<W: UiWriter> Agent<W> {
        // break to let the outer loop's auto-continue logic handle it
        if any_tool_executed {
            debug!("Tools were executed, continuing - breaking to auto-continue");
            // IMPORTANT: Save any text response to context window before breaking
            // This ensures text displayed after tool execution is not lost
            if !current_response.trim().is_empty() {
                debug!(
                    "Saving current_response ({} chars) to context before auto-continue",
                    current_response.len()
                );
                let assistant_msg = Message::new(
                    MessageRole::Assistant,
                    current_response.clone(),
                );
                self.context_window.add_message(assistant_msg);
            }
            // NOTE: We intentionally do NOT set full_response here.
            // The content was already displayed during streaming.
            // Setting full_response would cause duplication when the