Add prompt cache statistics tracking to /stats command

- Extend the Usage struct with cache_creation_tokens and cache_read_tokens fields (see the Usage sketch after this list)
- Parse Anthropic's cache_creation_input_tokens and cache_read_input_tokens
- Parse OpenAI's prompt_tokens_details.cached_tokens from automatic prefix caching
- Add a CacheStats struct to Agent for cumulative tracking across API calls (see the CacheStats sketch after this list)
- Add a "Prompt Cache Statistics" section to the /stats output showing:
  - API call count and cache hit count
  - Hit rate percentage
  - Total input tokens and cache read/creation tokens
  - Cache efficiency (% of input tokens served from the cache)
- Update all provider implementations and test files
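
A minimal sketch of the extended Usage struct and the provider-side parsing, assuming serde-based deserialization of the provider responses. The Usage field names and the wire-format field names are the ones listed above; the intermediate AnthropicUsage, OpenAIUsage, and PromptTokensDetails types are illustrative, not the project's actual structs:

    // Extended Usage struct; the two cache fields are the ones added in this commit.
    #[derive(Debug, Clone, Default)]
    pub struct Usage {
        pub prompt_tokens: u64,
        pub completion_tokens: u64,
        pub total_tokens: u64,
        pub cache_creation_tokens: u64,
        pub cache_read_tokens: u64,
    }

    // Hypothetical serde mirror of the Anthropic usage block.
    #[derive(serde::Deserialize)]
    struct AnthropicUsage {
        input_tokens: u64,
        output_tokens: u64,
        #[serde(default)]
        cache_creation_input_tokens: u64,
        #[serde(default)]
        cache_read_input_tokens: u64,
    }

    // Hypothetical serde mirror of the OpenAI usage block.
    #[derive(serde::Deserialize)]
    struct OpenAIUsage {
        prompt_tokens: u64,
        completion_tokens: u64,
        total_tokens: u64,
        #[serde(default)]
        prompt_tokens_details: Option<PromptTokensDetails>,
    }

    #[derive(serde::Deserialize, Default)]
    struct PromptTokensDetails {
        #[serde(default)]
        cached_tokens: u64,
    }

    impl From<AnthropicUsage> for Usage {
        fn from(u: AnthropicUsage) -> Self {
            Usage {
                // Anthropic reports cache tokens separately from input_tokens;
                // how they fold into the prompt/total counts is a project decision.
                prompt_tokens: u.input_tokens,
                completion_tokens: u.output_tokens,
                total_tokens: u.input_tokens + u.output_tokens,
                cache_creation_tokens: u.cache_creation_input_tokens,
                cache_read_tokens: u.cache_read_input_tokens,
            }
        }
    }

    impl From<OpenAIUsage> for Usage {
        fn from(u: OpenAIUsage) -> Self {
            Usage {
                prompt_tokens: u.prompt_tokens,
                completion_tokens: u.completion_tokens,
                total_tokens: u.total_tokens,
                // OpenAI's automatic prefix caching reports only reads, never creation.
                cache_creation_tokens: 0,
                cache_read_tokens: u.prompt_tokens_details.unwrap_or_default().cached_tokens,
            }
        }
    }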
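
And a sketch of the cumulative tracking behind the "Prompt Cache Statistics" section. Only the struct name CacheStats and the reported quantities (API calls, cache hits, hit rate, cache efficiency) come from this commit; the field and method names are assumptions:

    // Cumulative cache statistics held by the Agent across API calls.
    #[derive(Debug, Default)]
    pub struct CacheStats {
        pub api_calls: u64,
        pub cache_hits: u64,
        pub total_input_tokens: u64,
        pub total_cache_read_tokens: u64,
        pub total_cache_creation_tokens: u64,
    }

    impl CacheStats {
        // Fold one API call's usage into the running totals.
        pub fn record(&mut self, usage: &Usage) {
            self.api_calls += 1;
            if usage.cache_read_tokens > 0 {
                self.cache_hits += 1;
            }
            self.total_input_tokens += usage.prompt_tokens;
            self.total_cache_read_tokens += usage.cache_read_tokens;
            self.total_cache_creation_tokens += usage.cache_creation_tokens;
        }

        // Hit rate: percentage of API calls that read anything from the cache.
        pub fn hit_rate(&self) -> f64 {
            if self.api_calls == 0 {
                0.0
            } else {
                self.cache_hits as f64 / self.api_calls as f64 * 100.0
            }
        }

        // Cache efficiency: percentage of input tokens served from the cache.
        pub fn cache_efficiency(&self) -> f64 {
            if self.total_input_tokens == 0 {
                0.0
            } else {
                self.total_cache_read_tokens as f64 / self.total_input_tokens as f64 * 100.0
            }
        }
    }

The sketch counts a call as a hit whenever any tokens were read from the cache, and computes efficiency over all recorded input tokens; both definitions are assumptions about the commit's actual policy.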
Author: Dhanji R. Prasanna
Date:   2026-01-27 11:32:45 +11:00
Commit: 5b4079e861 (parent 96899230a4)

13 changed files with 214 additions and 2 deletions

@@ -763,6 +763,8 @@ impl LLMProvider for DatabricksProvider {
             prompt_tokens: databricks_response.usage.prompt_tokens,
             completion_tokens: databricks_response.usage.completion_tokens,
             total_tokens: databricks_response.usage.total_tokens,
+            cache_creation_tokens: 0, // Databricks doesn't support prompt caching
+            cache_read_tokens: 0,
         };
         debug!(
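
Zero-filling the new fields for providers without prompt caching, as in the Databricks hunk above, keeps the aggregate arithmetic uniform across providers: such calls still count toward the API call and input-token totals but can never register as cache hits.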