Add prompt cache statistics tracking to /stats command
- Extend Usage struct with cache_creation_tokens and cache_read_tokens fields
- Parse Anthropic cache_creation_input_tokens and cache_read_input_tokens
- Parse OpenAI prompt_tokens_details.cached_tokens for automatic prefix caching
- Add CacheStats struct to Agent for cumulative tracking across API calls (sketched below)
- Add "Prompt Cache Statistics" section to /stats output showing:
  - API call count and cache hit count
  - Hit rate percentage
  - Total input tokens and cache read/creation tokens
  - Cache efficiency (% of input served from cache)
- Update all provider implementations and test files
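A minimal sketch of how the extended Usage and the cumulative CacheStats might look. Only the field names mentioned above come from this commit; the exact types, method names (record, hit_rate, efficiency), and derive attributes are assumptions for illustration.

// Sketch only: names beyond those in the commit message are assumptions.
#[derive(Debug, Default, Clone)]
pub struct Usage {
    pub prompt_tokens: i64,
    pub completion_tokens: i64,
    pub total_tokens: i64,
    pub cache_creation_tokens: i64, // tokens written into the prompt cache
    pub cache_read_tokens: i64,     // tokens served from the prompt cache
}

#[derive(Debug, Default)]
pub struct CacheStats {
    pub api_calls: u64,
    pub cache_hits: u64,
    pub total_input_tokens: u64,
    pub total_cache_read_tokens: u64,
    pub total_cache_creation_tokens: u64,
}

impl CacheStats {
    /// Accumulate one API call's usage into the running totals.
    pub fn record(&mut self, usage: &Usage) {
        self.api_calls += 1;
        if usage.cache_read_tokens > 0 {
            self.cache_hits += 1;
        }
        self.total_input_tokens += usage.prompt_tokens as u64;
        self.total_cache_read_tokens += usage.cache_read_tokens as u64;
        self.total_cache_creation_tokens += usage.cache_creation_tokens as u64;
    }

    /// Percentage of API calls that read at least one token from the cache.
    pub fn hit_rate(&self) -> f64 {
        if self.api_calls == 0 {
            0.0
        } else {
            self.cache_hits as f64 / self.api_calls as f64 * 100.0
        }
    }

    /// Cache efficiency: percentage of input tokens served from the cache.
    pub fn efficiency(&self) -> f64 {
        if self.total_input_tokens == 0 {
            0.0
        } else {
            self.total_cache_read_tokens as f64 / self.total_input_tokens as f64 * 100.0
        }
    }
}

The /stats section described above can then be rendered directly from these totals after each provider call has been recorded.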
@@ -763,6 +763,8 @@ impl LLMProvider for DatabricksProvider {
             prompt_tokens: databricks_response.usage.prompt_tokens,
             completion_tokens: databricks_response.usage.completion_tokens,
             total_tokens: databricks_response.usage.total_tokens,
+            cache_creation_tokens: 0, // Databricks doesn't support prompt caching
+            cache_read_tokens: 0,
         };

         debug!(
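For providers that do report cache usage, the parsing could look roughly like the helpers below. The JSON field names (cache_creation_input_tokens, cache_read_input_tokens, prompt_tokens_details.cached_tokens) are the ones named in the commit message; the helper functions themselves are hypothetical and only sketch the idea using serde_json.

use serde_json::Value;

// Hypothetical helper: OpenAI reports automatic prefix-cache hits under
// prompt_tokens_details.cached_tokens inside the usage object.
fn parse_openai_cached_tokens(usage: &Value) -> i64 {
    usage
        .get("prompt_tokens_details")
        .and_then(|d| d.get("cached_tokens"))
        .and_then(|v| v.as_i64())
        .unwrap_or(0)
}

// Hypothetical helper: Anthropic reports cache writes and reads as separate
// fields on the usage object.
fn parse_anthropic_cache_tokens(usage: &Value) -> (i64, i64) {
    let creation = usage
        .get("cache_creation_input_tokens")
        .and_then(|v| v.as_i64())
        .unwrap_or(0);
    let read = usage
        .get("cache_read_input_tokens")
        .and_then(|v| v.as_i64())
        .unwrap_or(0);
    (creation, read)
}

Providers without prompt caching, such as Databricks in the hunk above, simply fill both new fields with 0 so the aggregate statistics remain well defined.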