
Conversation


@lfr-0531 lfr-0531 commented Aug 29, 2025

Summary by CodeRabbit

  • New Features
    • Added a postprocessing hook to the CUDA graph capture pipeline to restore inputs after forward execution; applied to both standard and MoE paths when speculative decoding is enabled.
  • Bug Fixes
    • Prevents input mutations from accumulating across CUDA graph captures, improving correctness and stability with speculative decoding and overlap scheduling.
  • Refactor
    • Updated capture flow to separate forward execution and postprocessing, ensuring inputs remain unchanged across captures.
  • Chores
    • API adjustment: capture now requires a postprocessing callback alongside the forward function.

Description

When using one-model speculative decoding methods, e.g., MTP, a preprocessing step runs before the model forward and mutates kv_len_cuda in the attention metadata. During CUDA graph capture, there are two warmup iterations that reuse the same inputs, so the attention metadata is mutated again after each forward.

To fix this, I added a postprocessing step after the model forward that reverts these mutations. It is applied only during CUDA graph capture, to avoid adding performance overhead on the replay path.
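A minimal sketch of the resulting capture flow (simplified from the cuda_graph_runner.py change in this PR; graph bookkeeping and output handling are omitted):

import torch

def capture(self, batch_size, forward_fn, postprocess_fn, initial_inputs):
    """Warm up, then capture; postprocess_fn reverts in-place input mutations."""
    capture_inputs = initial_inputs  # static buffers reused on every replay
    graph = torch.cuda.CUDAGraph()
    for _ in range(self.WARMUP_STEPS):
        forward_fn(capture_inputs)
        postprocess_fn(capture_inputs)  # restore inputs before the next warmup pass
    with torch.cuda.graph(graph, pool=self.memory_pool):
        output = forward_fn(capture_inputs)
    postprocess_fn(capture_inputs)  # restore inputs after the captured forward
    # output tensors are stashed for later replay (omitted here)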

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
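
For example, an illustrative invocation combining flags documented above:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast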

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since a lack of user care and validation can cause the top of tree to break.

@lfr-0531 lfr-0531 requested a review from a team as a code owner August 29, 2025 10:26

coderabbitai bot commented Aug 29, 2025

📝 Walkthrough

Walkthrough

Updates introduce a postprocessing callback into CUDA graph capture and integrate it with the model engine. The CUDA runner’s capture method now accepts a postprocess function and invokes it during warmup and capture. The model engine adds a private input postprocessing method and wires it into capture flows for standard and MoE paths under specific feature flags.

Changes

Cohort / File(s) Summary of changes
CUDA Graph Runner API & Flow
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
Extends capture signature to include postprocess_fn. Calls postprocess_fn(capture_inputs) after each warmup forward and after the main forward within CUDA graph capture. Other capture logic unchanged.
Model Engine Capture Integration
tensorrt_llm/_torch/pyexecutor/model_engine.py
Adds _postprocess_inputs(inputs) to revert preprocessing mutations (position_ids and kv_lens) using stored offsets when kv_cache_manager is present. Defines capture_postprocess_fn to invoke this method and integrates it into capture for standard and MoE flows when speculative decoding is enabled and overlap scheduler is active.
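
A hedged sketch of the revert described above (the offset attribute name and the exact indexing below are illustrative assumptions, not the actual model_engine.py implementation):

def _postprocess_inputs(self, inputs):
    """Revert the in-place mutations made by _preprocess_inputs so that
    repeated forwards during CUDA graph capture see identical inputs."""
    if self.enable_spec_decode and not self._disable_overlap_scheduler:
        attn_metadata = inputs['attn_metadata']
        if attn_metadata.kv_cache_manager is not None:
            num_seqs = attn_metadata.num_seqs
            # Hypothetical offsets stashed by _preprocess_inputs.
            attn_metadata.kv_lens_cuda[:num_seqs] -= self._saved_spec_offsets
            inputs['position_ids'] -= self._saved_spec_offsets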

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller
  participant ModelEngine
  participant CUDAGraphRunner as CUDA Graph Runner
  participant CUDAGraph as CUDA Graph
  participant KV as KVCacheManager

  Caller->>ModelEngine: request forward (capture-enabled)
  ModelEngine->>CUDAGraphRunner: capture(batch_size, forward_fn, postprocess_fn, initial_inputs)

  rect rgb(245,245,255)
    note over CUDAGraphRunner: Warmup loop
    loop warmup steps
      CUDAGraphRunner->>ModelEngine: forward_fn(inputs)
      ModelEngine-->>CUDAGraphRunner: outputs
      CUDAGraphRunner->>ModelEngine: postprocess_fn(inputs)
      ModelEngine->>ModelEngine: _postprocess_inputs(inputs)
    end
  end

  rect rgb(240,255,240)
    note over CUDAGraphRunner,CUDAGraph: CUDA graph capture
    CUDAGraphRunner->>CUDAGraph: begin_capture
    CUDAGraphRunner->>ModelEngine: forward_fn(inputs)
    ModelEngine-->>CUDAGraphRunner: outputs
    CUDAGraphRunner->>ModelEngine: postprocess_fn(inputs)
    ModelEngine->>ModelEngine: _postprocess_inputs(inputs)
    CUDAGraphRunner->>CUDAGraph: end_capture
  end

  Note over ModelEngine,KV: _postprocess_inputs adjusts position_ids and kv_lens using stored offsets (when KV present)
  CUDAGraphRunner-->>ModelEngine: graph & static outputs
  ModelEngine-->>Caller: outputs

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • litaotju


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (1)

1-1: Add NVIDIA copyright header.

Per coding guidelines, prepend the 2025 NVIDIA copyright header.

Apply:

+# Copyright (c) 2025, NVIDIA Corporation. All rights reserved.
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

1-1: Add NVIDIA copyright header.

Per coding guidelines, prepend the 2025 NVIDIA copyright header.

Apply:

+# Copyright (c) 2025, NVIDIA Corporation. All rights reserved.
🧹 Nitpick comments (2)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (1)

182-188: Postprocess placement is correct; add a defensive guard.

Call the postprocess only if provided (future-proofing).

Apply:

-            for _ in range(self.WARMUP_STEPS):
-                forward_fn(capture_inputs)
-                postprocess_fn(capture_inputs)
+            for _ in range(self.WARMUP_STEPS):
+                forward_fn(capture_inputs)
+                if postprocess_fn:
+                    postprocess_fn(capture_inputs)
             with torch.cuda.graph(graph, pool=self.memory_pool):
                 output = forward_fn(capture_inputs)
-            postprocess_fn(capture_inputs)
+            if postprocess_fn:
+                postprocess_fn(capture_inputs)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

1128-1147: Good fix; clarify contract and ensure style compliance.

Add return annotation and Google-style docstring per guidelines.

Apply:

-    def _postprocess_inputs(self, inputs: Dict[str, Any]):
-        """
-        Postprocess to make sure model forward doesn't change the inputs.
-        It is only used in cuda graph capture, because other cases will prepare
-        new inputs before the model forward.
-        """
+    def _postprocess_inputs(self, inputs: Dict[str, Any]) -> None:
+        """Revert preprocessing mutations applied during CUDA graph capture.
+
+        Args:
+            inputs: The model input dict mutated by _preprocess_inputs; must contain
+                'attn_metadata', 'input_ids', and 'position_ids'.
+        """

Would you like a small unit test that asserts position_ids and kv_lens_cuda are unchanged across two warmup forwards during capture?

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 62459d5 and 172aa16.

📒 Files selected for processing (2)
  • tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}: Use spaces only; no tabs; indent with 4 spaces
Prepend NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Indent Python with 4 spaces; no tabs
Preserve module namespaces when importing: from package.subpackage import foo; then call foo.SomeClass() instead of importing the class directly
Python naming: files snake_case; classes PascalCase; functions/methods snake_case; locals snake_case (prefix k_ when starting with a number); globals UPPER_SNAKE_CASE with G_ prefix; constants UPPER_SNAKE_CASE
Avoid shadowing outer-scope variables; initialize all externally visible members in init
Prefer docstrings for interfaces used outside a file; limit comments to function-internal or file-local interfaces
Use Google-style docstrings for classes and functions; document attributes/variables inline so Sphinx can render them
Avoid reflection when simpler alternatives exist; prefer explicit parameters and return dicts over locals()/dynamic tricks
In try/except, catch the narrowest exceptions possible; keep try bodies minimal and use else for the main logic when doing duck-typing checks

Files:

  • tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (2)
  • enable_spec_decode (52-53)
  • capture (139-191)
tensorrt_llm/_torch/attention_backend/interface.py (6)
  • num_seqs (255-259)
  • num_contexts (205-206)
  • num_contexts (209-212)
  • num_generations (215-216)
  • num_generations (219-222)
  • num_ctx_tokens (273-274)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tensorrt_llm/_torch/pyexecutor/model_engine.py (3)

2215-2221: Capture forward wrapper looks correct.

Wrapping _forward_step preserves existing behavior; no issues spotted.


2222-2224: Scoped postprocess is appropriate.

Keeping the revert in a separate callback keeps replay hot path clean.


2225-2229: All cuda_graph_runner.capture calls have been updated to the new signature, and no additional usages were found.

Comment on lines 139 to 141
     def capture(self, batch_size: int, forward_fn: Callable,
-                initial_inputs: Dict[str, Any]):
+                postprocess_fn: Callable, initial_inputs: Dict[str, Any]):
         """Captures the forward pass for a given batch size."""

💡 Verification agent

🧩 Analysis chain

API change: make postprocess param explicit and align types with returned outputs.

  • The new parameter is good. Add a precise type for it. Also, capture now stores/returns dict outputs; update type hints accordingly and ensure replay returns the concrete object, not a wrapper.

Apply:

-    def capture(self, batch_size: int, forward_fn: Callable,
-                postprocess_fn: Callable, initial_inputs: Dict[str, Any]):
+    def capture(
+        self,
+        batch_size: int,
+        forward_fn: Callable[[Dict[str, Any]], Dict[str, Any]],
+        postprocess_fn: Callable[[Dict[str, Any]], None],
+        initial_inputs: Dict[str, Any],
+    ) -> None:
-        self.graph_outputs: Dict[Tuple[int, int],
-                                 Callable[[], Optional[torch.Tensor]]] = {}
+        self.graph_outputs: Dict[
+            Tuple[int, int], Callable[[], Optional[Dict[str, Any]]]
+        ] = {}
-    def replay(self, batch_size: int,
-               current_inputs: Dict[str, Any]) -> Optional[torch.Tensor]:
+    def replay(
+        self, batch_size: int, current_inputs: Dict[str, Any]
+    ) -> Optional[Dict[str, Any]]:
-        output_ref = self.graph_outputs[key]
-
-        return output_ref
+        output_ref = self.graph_outputs[key]
+        return output_ref() if callable(output_ref) else output_ref

Run to confirm no other callsites of capture need changes:

Also applies to: 45-47, 193-201


🏁 Script executed:

#!/bin/bash
rg -nP -C2 '\bCUDAGraphRunner\b.*\bcapture\s*\('

Length of output: 50


🏁 Script executed:

#!/bin/bash
# Search for any calls to .capture( across the repository
rg -nP -C2 '\.capture\s*\(' .

Length of output: 7724


Include explicit postprocess_fn in all CUDAGraphRunner.capture callsites
All existing calls (e.g. in tests/unittest/_torch/modeling/*.py and model_engine.py) must pass a postprocess_fn: Callable[[Dict[str, Any]], None] before initial_inputs (for tests, use lambda _: None). This ensures the API change aligns with updated type hints and prevents test failures.

🤖 Prompt for AI Agents
In tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py around lines 139 to 141,
the capture signature now requires an explicit postprocess_fn parameter but
existing callsites don’t pass it; update every CUDAGraphRunner.capture(...) call
(including tests in tests/unittest/_torch/modeling/*.py and model_engine.py) to
pass a postprocess_fn: Callable[[Dict[str, Any]], None] as the argument
immediately before initial_inputs — in tests use a no-op lambda like `lambda _:
None`; ensure all updated callsites match the new parameter order and type to
satisfy the updated type hints and prevent test failures.

Signed-off-by: Fanrong Li <[email protected]>

@QiJune QiJune left a comment


LGTM

@lfr-0531

/bot run

@tensorrt-cicd

PR_Github #16984 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #16984 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #12752 completed with status: 'FAILURE'

Signed-off-by: Fanrong Li <[email protected]>
@lfr-0531 lfr-0531 changed the base branch from main to release/1.1.0rc2 September 1, 2025 14:03
@lfr-0531 lfr-0531 commented Sep 1, 2025

/bot run

@tensorrt-cicd

PR_Github #17224 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #17224 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #6 completed with status: 'FAILURE'

@lfr-0531 lfr-0531 commented Sep 2, 2025

/bot run

@tensorrt-cicd

PR_Github #17273 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #17273 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #10 completed with status: 'FAILURE'

@lfr-0531 lfr-0531 commented Sep 2, 2025

/bot run

@tensorrt-cicd

PR_Github #17320 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #17320 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #16 completed with status: 'FAILURE'

Signed-off-by: Fanrong Li <[email protected]>
@lfr-0531 lfr-0531 commented Sep 2, 2025

/bot run

@tensorrt-cicd

PR_Github #17350 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #17350 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #18 completed with status: 'FAILURE'

@lfr-0531 lfr-0531 commented Sep 3, 2025

/bot run

@tensorrt-cicd

PR_Github #17451 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #17451 [ run ] completed with state FAILURE
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #35 completed with status: 'FAILURE'

@lfr-0531 lfr-0531 commented Sep 3, 2025

/bot run

@tensorrt-cicd

PR_Github #17519 [ run ] triggered by Bot
