use consistent naming for compaction
@@ -11,7 +11,7 @@ Control commands are special commands you can use during an interactive G3 session

 | Command | Description |
 |---------|-------------|
-| `/compact` | Manually trigger conversation summarization |
+| `/compact` | Manually trigger conversation compaction |
 | `/thinnify` | Replace large tool results with file references (first third) |
 | `/skinnify` | Full context thinning (entire context window) |
 | `/readme` | Reload README.md and AGENTS.md from disk |
@@ -22,7 +22,7 @@ Control commands are special commands you can use during an interactive G3 session

 ## /compact

-Manually trigger conversation summarization to reduce context size.
+Manually trigger conversation compaction to reduce context size.

 **When to use**:
 - Context usage is getting high (70%+)
@@ -30,7 +30,7 @@ Manually trigger conversation summarization to reduce context size.
 - Conversation has accumulated irrelevant history

 **What it does**:
-1. Sends conversation history to LLM for summarization
+1. Sends conversation history to LLM for compaction
 2. Replaces detailed history with concise summary
 3. Preserves key decisions and context
 4. Significantly reduces token usage
@@ -144,7 +144,7 @@ Show detailed context and performance statistics.
 - Session duration
 - Token usage breakdown
 - Tool call metrics
-- Thinning and summarization events
+- Thinning and compaction events
 - First-token latency statistics

 **Example**:
@@ -198,7 +198,7 @@ When context gets high:
 1. **50-70%**: Consider `/thinnify`
 2. **70-80%**: Use `/compact`
 3. **80-90%**: Use `/skinnify` then `/compact`
-4. **90%+**: Auto-summarization triggers
+4. **90%+**: Auto-compaction triggers

 ### Best Practices

@@ -218,7 +218,7 @@ G3 performs automatic context management:
 | 50% | Thin oldest third of context |
 | 60% | Thin oldest third of context |
 | 70% | Thin oldest third of context |
-| 80% | Auto-summarization (if `auto_compact = true`) |
+| 80% | Auto-compaction (if `auto_compact = true`) |
 | 90% | Aggressive thinning before tool calls |

 Manual commands give you finer control over when and how this happens.
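The escalation described in this hunk's threshold table amounts to a small dispatch on usage percentage. As a minimal Rust sketch (the `ContextAction` enum and `select_action` function are hypothetical names for illustration, not G3's actual types):

```rust
// Hypothetical sketch of the threshold dispatch described in the table above.
// `ContextAction` and `select_action` are illustrative names, not G3's API.
#[derive(Debug, PartialEq)]
enum ContextAction {
    None,            // below 50%: nothing to do
    ThinOldestThird, // 50%, 60%, 70% thresholds
    AutoCompact,     // 80%, only if auto_compact = true
    AggressiveThin,  // 90%+, before tool calls
}

fn select_action(usage_pct: u8, auto_compact: bool) -> ContextAction {
    if usage_pct >= 90 {
        ContextAction::AggressiveThin
    } else if usage_pct >= 80 && auto_compact {
        ContextAction::AutoCompact
    } else if usage_pct >= 50 {
        ContextAction::ThinOldestThird
    } else {
        ContextAction::None
    }
}

fn main() {
    assert_eq!(select_action(60, true), ContextAction::ThinOldestThird);
    assert_eq!(select_action(85, true), ContextAction::AutoCompact);
    // With auto_compact disabled, the 80-89% band falls back to thinning.
    assert_eq!(select_action(85, false), ContextAction::ThinOldestThird);
}
```

Note how a disabled `auto_compact` degrades the 80% step to plain thinning, matching the conditional in the table.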
@@ -289,7 +289,7 @@ The `ContextWindow` struct manages conversation history with intelligent token tracking

 1. **Token Tracking**: Monitors usage as percentage of provider's context limit
 2. **Context Thinning**: At 50%, 60%, 70%, 80% thresholds, replaces large tool results with file references
-3. **Auto-Summarization**: At 80% capacity, triggers conversation summarization
+3. **Auto-Compaction**: At 80% capacity, triggers conversation compaction
 4. **Provider Adaptation**: Adjusts to different model context windows (4k to 200k+ tokens)

 ## Error Handling
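Feature 2 above (context thinning) can be pictured as swapping oversized tool results for pointers to files on disk. The following is a hypothetical Rust sketch, not the real `ContextWindow` code; the `Message` struct, `thin_oldest` function, and placeholder file name are all invented for illustration:

```rust
// Illustrative sketch of context thinning: replace large tool results in the
// oldest part of the conversation with short file references. Not G3's actual
// code; `Message`, `thin_oldest`, and the placeholder path are invented.
struct Message {
    role: String, // "user", "assistant", or "tool"
    content: String,
}

/// Replace tool results longer than `max_len` bytes among the first `n`
/// (oldest) messages with a short file reference.
fn thin_oldest(messages: &mut [Message], n: usize, max_len: usize) {
    for (i, msg) in messages.iter_mut().take(n).enumerate() {
        if msg.role == "tool" && msg.content.len() > max_len {
            // A real implementation would persist the full content to disk
            // before dropping it from the context window.
            msg.content = format!("[tool result thinned; full output saved as tool_result_{i}.txt]");
        }
    }
}

fn main() {
    let mut history = vec![
        Message { role: "tool".into(), content: "x".repeat(10_000) },
        Message { role: "assistant".into(), content: "Done.".into() },
    ];
    // Thin only the oldest message, cutting anything over 1 KiB.
    thin_oldest(&mut history, 1, 1024);
    assert!(history[0].content.len() < 100);
    assert_eq!(history[1].content, "Done.");
}
```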
@@ -376,5 +376,5 @@ For Databricks OAuth:

 If you see context overflow errors:
 1. Check `max_context_length` in `[agent]`
-2. Use `/compact` command to manually summarize
+2. Use `/compact` command to manually compact
 3. Use `/thinnify` to replace large tool results with file references
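The `max_context_length` setting from step 1 and the `auto_compact` flag mentioned earlier both live in the `[agent]` section of the config file. A sketch with illustrative values (the number shown is an example, not a documented default):

```toml
[agent]
# Maximum tokens to keep in the conversation context (illustrative value).
max_context_length = 128000
# Enable automatic compaction at the 80% usage threshold.
auto_compact = true
```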
@@ -386,7 +386,7 @@ To reduce rate limit issues:
 ### Context Window Errors

 If you see "context too long" errors:
-1. Use `/compact` to summarize conversation
+1. Use `/compact` to compact conversation
 2. Use `/thinnify` to replace large tool results
 3. Increase `max_context_length` in config
 4. Switch to a provider with larger context