Walkthrough

Adds a system-prompt cache boundary marker, helpers to split and strip it, threads the marker into channel prompt rendering and templates, applies selective cache_control when building Anthropic system blocks, strips the marker for non-Anthropic providers, updates tests and docs, and sorts MCP tool-name output.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Force-pushed from 4ec775f to f3163da
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docs/content/docs/(core)/prompts.mdx (1)
113-142: ⚠️ Potential issue | 🟡 Minor

**Keep memory context below the documented cache boundary.**

The example still shows `memory_bulletin` above `{{ system_prompt_cache_boundary }}`, which implies it is part of the cached stable prefix. The actual channel template renders fallback memory context below the boundary, so this doc should move that block under the boundary and list it as volatile.

📝 Suggested docs adjustment

```diff
-{%- if memory_bulletin %}
-## Memory Context
-
-{{ memory_bulletin }}
-{%- endif %}
-
 [Base channel instructions...]
@@
 {%- if knowledge_synthesis %}
 ## Knowledge Context

 {{ knowledge_synthesis }}
 {%- endif %}
+
+{%- if memory_bulletin and not knowledge_synthesis %}
+## Memory Context
+
+{{ memory_bulletin }}
+{%- endif %}
@@
 - participant context
 - knowledge context
+- memory context fallback
 - conversation context
```

Also applies to: 170-183
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `docs/content/docs/(core)/prompts.mdx` around lines 113-142: The doc incorrectly places the fallback memory block (memory_bulletin) above the cached stable prefix (system_prompt_cache_boundary); move the memory_bulletin block so it appears below {{ system_prompt_cache_boundary }} and classify it as volatile (i.e., describe it as non-cached/volatile context), and do the same relocation/description for the other occurrence of the memory_bulletin/working_memory blocks later in the file (the second instance that mirrors lines ~170-183).

src/llm/anthropic/params.rs (1)
10-11: 🛠️ Refactor suggestion | 🟠 Major

**Move the Claude Code preamble out of Rust.**

`CLAUDE_CODE_SYSTEM_PREAMBLE` is sent as Anthropic system prompt text, so it should live in `prompts/` and be loaded through the prompt/text registry instead of being stored as a Rust string constant.

As per coding guidelines: "Don't store prompts as string constants in Rust. System prompts live in `prompts/` as markdown files. Load at startup or on demand."
Verify each finding against the current code and only fix it if needed. In `@src/llm/anthropic/params.rs` around lines 10 - 11, Remove the CLAUDE_CODE_SYSTEM_PREAMBLE Rust constant and move its text into a markdown file under prompts/ (e.g., prompts/claude/code_system_preamble.md), then change all usages of CLAUDE_CODE_SYSTEM_PREAMBLE to load the prompt via the project's prompt/text registry API (load at startup or on demand via the registry lookup) so the system prompt is managed by the prompt registry instead of being a Rust string constant.
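One way the suggested registry lookup could look, as a minimal sketch assuming a simple file-backed registry. `PromptRegistry`, `load_file`, and the key-by-file-stem convention are illustrative names, not the project's actual API:

```rust
use std::collections::HashMap;
use std::fs;
use std::path::Path;

/// Hypothetical prompt registry: prompts live as markdown files under
/// prompts/ and are read into memory, rather than living as Rust constants.
struct PromptRegistry {
    prompts: HashMap<String, String>,
}

impl PromptRegistry {
    fn new() -> Self {
        Self { prompts: HashMap::new() }
    }

    /// Load one prompt file, keyed by its file stem, e.g.
    /// prompts/claude/code_system_preamble.md -> "code_system_preamble".
    fn load_file(&mut self, path: &Path) -> std::io::Result<()> {
        let key = path
            .file_stem()
            .unwrap_or_default()
            .to_string_lossy()
            .into_owned();
        self.prompts.insert(key, fs::read_to_string(path)?);
        Ok(())
    }

    fn get(&self, name: &str) -> Option<&str> {
        self.prompts.get(name).map(String::as_str)
    }
}
```

Callers that previously referenced `CLAUDE_CODE_SYSTEM_PREAMBLE` would then ask the registry for the preamble by name at startup or on demand.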
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e63b23aa-a8f0-4ab2-a706-d1aff052545d
📒 Files selected for processing (7)
- docs/content/docs/(core)/prompts.mdx
- prompts/en/channel.md.j2
- src/llm/anthropic/params.rs
- src/llm/model.rs
- src/mcp.rs
- src/prompts.rs
- src/prompts/engine.rs
Force-pushed from f3163da to 720df56
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/prompts/engine.rs (1)
8-8: **Marker token value — consider uniqueness guarantees.**

The HTML-comment marker is fine for the current markdown templates, but since `strip_system_prompt_cache_boundary` does a plain `replace` and `split_once` splits on the first occurrence, any user-authored content that happens to contain this exact comment (e.g., a pasted transcript in `backfill_transcript` or `working_memory`) would collide. Unlikely in practice, but worth keeping in mind — a more obscure sentinel (e.g., one including a UUID) would eliminate the risk entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/prompts/engine.rs` at line 8, The current sentinel string SYSTEM_PROMPT_CACHE_BOUNDARY is too generic and could collide with user content; change its value to a far-more-obscure, globally-unique marker (e.g., append a stable UUID-like suffix) and update all places that reference it (for example strip_system_prompt_cache_boundary) to use the new constant; ensure any serialization, tests, or template files that embed the marker are updated to the new unique value so split_once/replace behavior cannot be triggered by user-supplied text.
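A sketch of the UUID-suffixed sentinel the nitpick suggests, together with the split/strip helpers it references. The marker value and helper bodies are assumptions reconstructed from the described `split_once`/`replace` behavior, not the project's actual code:

```rust
/// Collision-resistant boundary marker. The UUID suffix is illustrative;
/// any stable, sufficiently random token works.
const SYSTEM_PROMPT_CACHE_BOUNDARY: &str =
    "<!-- system-prompt-cache-boundary:3f8a1c9e-7d42-4b0a-9e55-c61d2f0a8b13 -->";

/// Split on the first marker: everything before it is the stable, cacheable
/// prefix; everything after it is volatile. Returns None when no marker exists.
fn split_system_prompt_cache_boundary(prompt: &str) -> Option<(&str, &str)> {
    prompt.split_once(SYSTEM_PROMPT_CACHE_BOUNDARY)
}

/// Remove every occurrence of the marker, for providers that have no
/// prompt-cache control and should never see the sentinel.
fn strip_system_prompt_cache_boundary(prompt: &str) -> String {
    prompt.replace(SYSTEM_PROMPT_CACHE_BOUNDARY, "")
}
```

With a UUID in the token, user-pasted content triggering `split_once` or `replace` becomes practically impossible.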
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1c278440-ba0d-44ab-a666-55dd80da3876
📒 Files selected for processing (7)
- docs/content/docs/(core)/prompts.mdx
- prompts/en/channel.md.j2
- src/llm/anthropic/params.rs
- src/llm/model.rs
- src/mcp.rs
- src/prompts.rs
- src/prompts/engine.rs
✅ Files skipped from review due to trivial changes (4)
- prompts/en/channel.md.j2
- src/prompts.rs
- docs/content/docs/(core)/prompts.mdx
- src/llm/model.rs
```diff
 if let Some(preamble) = &request.preamble {
-    let mut preamble_block = serde_json::json!({
-        "type": "text",
-        "text": preamble,
-    });
-    if let Some(cc) = cache_control {
-        preamble_block["cache_control"] = cc.clone();
+    if let Some((stable_prefix, volatile_suffix)) =
+        crate::prompts::engine::split_system_prompt_cache_boundary(preamble)
+    {
+        push_system_text_block(&mut system_blocks, stable_prefix, cache_control);
+        push_system_text_block(&mut system_blocks, volatile_suffix, &None);
+    } else {
+        push_system_text_block(&mut system_blocks, preamble, cache_control);
     }
-    system_blocks.push(preamble_block);
 }
```
Multiple boundary markers: inconsistent handling between providers.
`split_system_prompt_cache_boundary` uses `split_once`, so if the preamble contains more than one `SYSTEM_PROMPT_CACHE_BOUNDARY` marker, any subsequent markers remain verbatim inside the volatile suffix text block sent to Anthropic. Meanwhile the non-Anthropic path in `src/llm/model.rs` uses `strip_system_prompt_cache_boundary` (a `replace`), which removes all occurrences. Today there's only one marker in the channel template, so this is latent rather than exploitable, but if another template or fragment ever adds the marker, Anthropic users would see the raw HTML comment leak into the system prompt while other providers would not.

Consider either splitting on the first marker and stripping any remaining markers from the suffix, or asserting/documenting a single-marker invariant.
🔧 Proposed tweak

```diff
 if let Some((stable_prefix, volatile_suffix)) =
     crate::prompts::engine::split_system_prompt_cache_boundary(preamble)
 {
+    let volatile_suffix =
+        crate::prompts::strip_system_prompt_cache_boundary(volatile_suffix);
     push_system_text_block(&mut system_blocks, stable_prefix, cache_control);
-    push_system_text_block(&mut system_blocks, volatile_suffix, &None);
+    push_system_text_block(&mut system_blocks, &volatile_suffix, &None);
 } else {
     push_system_text_block(&mut system_blocks, preamble, cache_control);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/anthropic/params.rs` around lines 134 - 143, The Anthropic branch
currently uses crate::prompts::engine::split_system_prompt_cache_boundary which
returns a (stable_prefix, volatile_suffix) via split_once but leaves any
additional SYSTEM_PROMPT_CACHE_BOUNDARY markers inside volatile_suffix; update
the logic in the block that calls split_system_prompt_cache_boundary (and then
push_system_text_block) to ensure remaining markers are removed from the
volatile suffix—e.g., after receiving volatile_suffix call the same
strip/replace helper used elsewhere (strip_system_prompt_cache_boundary or
equivalent replace) before passing it to push_system_text_block—so Anthropic and
non-Anthropic paths handle multiple markers consistently (alternative: assert a
single-marker invariant if you prefer).
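The behavior the agent prompt asks for can be sketched as one helper that splits on the first marker and then strips any stragglers from the volatile suffix. The function name, signature, and marker value here are illustrative, not the project's exact code:

```rust
/// Assumed marker value for this sketch; the real constant lives in
/// src/prompts/engine.rs.
const SYSTEM_PROMPT_CACHE_BOUNDARY: &str = "<!-- system-prompt-cache-boundary -->";

/// Split on the first boundary, then remove any remaining markers from the
/// volatile suffix so Anthropic and non-Anthropic paths agree on output.
/// Returns (stable_prefix, Some(volatile_suffix)) when a marker is present.
fn split_and_sanitize(preamble: &str) -> (String, Option<String>) {
    match preamble.split_once(SYSTEM_PROMPT_CACHE_BOUNDARY) {
        Some((stable, volatile)) => (
            stable.to_string(),
            // Strip duplicate markers instead of leaking them as raw HTML
            // comments into the system prompt text.
            Some(volatile.replace(SYSTEM_PROMPT_CACHE_BOUNDARY, "")),
        ),
        None => (preamble.to_string(), None),
    }
}
```

With this shape, a preamble containing two markers yields the same visible text on both provider paths.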
Summary
Test Plan
- cargo fmt --all
- cargo test --lib get_tool_names_returns_deterministic_sorted_names
- cargo test --lib prompt_cache
- git diff --check
- just preflight
- just gate-pr

Review Finding Closure
Made `McpManager::get_tool_names` deterministic by sorting rendered capability lines before prompt rendering. Verified with `cargo test --lib get_tool_names_returns_deterministic_sorted_names`.

Note
Implements prompt cache boundary infrastructure for Anthropic API integration. Adds a new `llm/anthropic` module with auth path detection (API key vs OAuth vs proxy bearer), cache retention policies (None/Short/Long with TTL), and parameter building. Updates the MCP manager to sort tool names deterministically for stable cache prefixes. System prompts now include explicit boundary markers that are preserved for Anthropic but stripped for other providers, enabling cache hits on stable channel/branch/worker content while keeping dynamic elements below the boundary. Includes comprehensive tests for cache control resolution, tool name ordering, and auth path detection.

Written by Tembo for commit 4ec775f
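The selective cache_control described in the note can be sketched with plain structs. The real Anthropic request types, field names, and the placement of the cache breakpoint are assumptions in this sketch, though "ephemeral" is the cache_control type Anthropic's API documents:

```rust
/// Simplified stand-in for an Anthropic system text block.
#[derive(Debug, PartialEq)]
struct SystemTextBlock {
    text: String,
    /// e.g. Some("ephemeral") to end a cache breakpoint at this block.
    cache_control: Option<&'static str>,
}

/// Build system blocks from a split preamble: the stable prefix carries the
/// cache breakpoint, the volatile suffix (if any) is left uncached.
fn build_system_blocks(stable: &str, volatile: Option<&str>) -> Vec<SystemTextBlock> {
    let mut blocks = vec![SystemTextBlock {
        text: stable.to_string(),
        cache_control: Some("ephemeral"), // cacheable prefix ends here
    }];
    if let Some(v) = volatile {
        blocks.push(SystemTextBlock {
            text: v.to_string(),
            cache_control: None, // volatile content never gets a breakpoint
        });
    }
    blocks
}
```

Because the breakpoint sits on the last stable block, changes to the volatile suffix do not invalidate the cached prefix on subsequent requests.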
llm/anthropicmodule with auth path detection (API key vs OAuth vs proxy bearer), cache retention policies (None/Short/Long with TTL), and parameter building. Updates MCP manager to sort tool names deterministically for stable cache prefixes. System prompts now include explicit boundary markers that are preserved for Anthropic but stripped for other providers, enabling cache hits on stable channel/branch/worker content while keeping dynamic elements below the boundary. Includes comprehensive tests for cache control resolution, tool name ordering, and auth path detection.Written by Tembo for commit 4ec775f