Jochen
ad198a8501
add code exploration fast start
...
This tries to short-circuit multiple round-trips to the LLM when reading code.
It's a precursor to context engineering tailored to specific tasks.
In initial experiments it's only marginally faster than regular mode and burns more tokens.
2025-11-25 22:51:32 +11:00
Jochen
a150ba6a55
adds TTL to cache_control
2025-11-18 23:23:49 +11:00
Jochen
296bf5a449
adds cache_control
2025-11-18 22:38:52 +11:00
Jochen
010a43d203
coach/player provider split + add OpenAI
...
Allows coach and player LLM providers to be specified separately.
Also adds an OpenAI provider.
2025-10-21 16:59:13 +11:00
Dhanji Prasanna
260c949576
token counting fixes
2025-10-09 12:11:21 +11:00
Dhanji Prasanna
046b54c49b
move embedded provider to a more suitable crate
2025-10-01 15:19:37 +10:00
Dhanji Prasanna
c490228824
Databricks support
2025-09-27 17:28:02 +10:00
Dhanji Prasanna
9a5486f2a8
Fix for tool use
2025-09-20 20:17:50 +10:00
Dhanji Prasanna
444245d7dd
Re-add the Anthropic provider
2025-09-20 18:40:51 +10:00
Dhanji Prasanna
fa34755851
tool-calling support for Anthropic
2025-09-15 09:07:12 +10:00
Dhanji Prasanna
57626042a9
initial import
2025-09-15 09:07:09 +10:00