
Conversation


@xxi-nv xxi-nv commented Aug 29, 2025

Signed-off-by: xxi [email protected]

modified:   tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
new file:   tensorrt_llm/_torch/modules/fused_moe/moe_backend.py
modified:   tests/unittest/_torch/modules/test_fused_moe.py

Summary by CodeRabbit

  • New Features

    • Introduces a pluggable MoE backend with automatic, hardware-aware selection.
    • Adds lazy backend initialization and preserves original (unpadded) hidden size per layer.
    • Exposes a public accessor to retrieve the selected backend.
  • Performance

    • Optimizes FP8 blockwise quantization on next‑gen GPUs, with backend-dispatched execution and improved tuning controls.
    • Simplifies runtime flags while maintaining behavior across all-to-all modes.
  • Tests

    • Adds an end-to-end test validating FP8 blockwise MoE outputs across different all-to-all configurations.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
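
For example, a typical invocation is /bot run --stage-list "A10-PyTorch-1" --disable-fail-fast, which runs only the listed stage (the stage name is the placeholder used above) and lets remaining jobs continue past individual failures.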

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Signed-off-by: xxi <[email protected]>

	modified:   tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
	new file:   tensorrt_llm/_torch/modules/fused_moe/moe_backend.py
	modified:   tests/unittest/_torch/modules/test_fused_moe.py
@xxi-nv xxi-nv requested a review from a team as a code owner August 29, 2025 12:25
@xxi-nv xxi-nv requested review from QiJune and kaiyux August 29, 2025 12:25

xxi-nv commented Aug 29, 2025

/bot run


coderabbitai bot commented Aug 29, 2025

📝 Walkthrough

Adds a pluggable MoE backend layer and integrates it into WideEPMoE. The module now lazily selects and initializes a backend (Cutlass or DeepGemm) and routes MoE execution through backend.run_moe. Introduces backend selection logic (SM100 + FP8 block scales -> DeepGemm), per-layer unpadded size, and new FP8 tests; deduplicated flags and call parameters.
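
As a rough sketch of that selection rule (not the actual MoEBackendSelection.select_backend code from this PR; the standalone helper and the use of torch.cuda.get_device_capability below are assumptions for illustration):

```python
import torch


def select_moe_backend_name(module) -> str:
    """Illustrative selection heuristic: DeepGemm only for FP8 block-scale
    quantization on SM100 (Blackwell); everything else stays on Cutlass."""
    major, minor = torch.cuda.get_device_capability()
    is_sm100 = (major, minor) == (10, 0)
    # `has_deepseek_fp8_block_scales` mirrors the module property this PR checks.
    if is_sm100 and getattr(module, "has_deepseek_fp8_block_scales", False):
        return "DeepGemm"
    return "Cutlass"
```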

Changes

Cohort / File(s): Summary

  • Backend framework (tensorrt_llm/_torch/modules/fused_moe/moe_backend.py)
    Introduces the MoEBackend interface with Cutlass and DeepGemm implementations, tactic finalization, compute/run APIs, DeepGemm workspace and permutation helpers, and MoEBackendSelection with SM100 + FP8 block-scales routing.
  • WideEPMoE integration (tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py)
    Replaces direct fused_moe calls with lazy backend.run_moe; adds a moe_backend_impl property and internal holder; records unpadded_hidden_size; updates the FP8 block-scales path (DeepGemm only on SM100); streamlines forward arguments and logging.
  • Tests (tests/unittest/_torch/modules/test_fused_moe.py)
    Adds an FP8 blockwise WideEPMoE vs. DeepGemm reference test across MPI ranks, parameterized over all-to-all methods; includes local FP8 weight quantization and a relaxed comparison. Note: a duplicate test definition appears.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant W as WideEPMoE
  participant S as MoEBackendSelection
  participant C as MoECutlassBackend
  participant D as MoEDeepGemmBackend
  participant B as Backend (selected)
  participant K as Kernels/Libs

  Note over W: Forward called (inputs, weights, quant scales)
  W->>W: Ensure weights created
  W->>S: select_backend(self)
  alt SM100 && FP8 block scales
    S-->>W: DeepGemm backend
    W->>D: lazy init (once)
    activate D
    W->>D: run_moe(...)
    D->>K: permute, grouped FP8 GEMMs
    D->>K: finalize scale (fused or separate)
    K-->>D: output
    D-->>W: output
    deactivate D
  else
    S-->>W: Cutlass backend
    W->>C: lazy init (once)
    activate C
    W->>C: run_moe(...)
    C->>K: autotune/finalize tactic (cached)
    C->>K: fused_moe compute
    K-->>C: output
    C-->>W: output
    deactivate C
  end
  W-->>W: Return output tensor

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

SW Architecture

Suggested reviewers

  • QiJune
  • yuxianq
  • litaotju
  • dongxuy04

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (6)
tensorrt_llm/_torch/modules/fused_moe/moe_backend.py (2)

1-4: Add NVIDIA copyright header.

Per repo guidelines, prepend the current-year NVIDIA copyright header at the top of all Python sources.

+# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
+#
+""" 
-MoE Backend abstraction for supporting different MoE computation implementations.
-This module provides a unified interface for different MoE backends (Cutlass, DeepGemm, etc.)
-"""
+MoE Backend abstraction for supporting different MoE computation implementations.
+This module provides a unified interface for different MoE backends (Cutlass, DeepGemm, etc.)
+"""

348-348: Wrap long line (E501).

Split the conditional selection of run_moe to satisfy 120-char limit.

-        run_moe = self.moe_runner.fused_moe_runner.run_moe_min_latency if min_latency_mode else self.moe_runner.fused_moe_runner.run_moe
+        run_moe = (self.moe_runner.fused_moe_runner.run_moe_min_latency
+                   if min_latency_mode
+                   else self.moe_runner.fused_moe_runner.run_moe)
tests/unittest/_torch/modules/test_fused_moe.py (1)

641-791: Assert backend selection for Blackwell + FP8 block-scales.

Add an assertion to ensure WideEPMoE actually picks DeepGemm backend on SM100 when FP8 block scales are used. This guards the selection heuristic.

         with mock.patch.object(WideEPMoE,
                                "select_alltoall_method_type",
                                return_value=alltoall_method_type):
             alltoall_model = WideEPMoE(
@@
             )
         alltoall_model.to("cuda")
         alltoall_model.load_weights([weights])
+        # Ensure DeepGemm backend is selected on Blackwell with FP8 block scales
+        from tensorrt_llm._torch.modules.fused_moe.moe_backend import MoEDeepGemmBackend
+        assert isinstance(alltoall_model.moe_backend_impl, MoEDeepGemmBackend)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (3)

1-4: Add NVIDIA copyright header.

This file lacks the required header. Please prepend it.

+# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
+
 import os
 from enum import IntEnum
 from typing import Dict, List, Optional, Tuple, Union

322-331: Backend-specific quant method duplication.

You now select DeepGemm's FP8 block-scales path here and again in MoEBackendSelection. Duplication risks drift. Consider centralizing the SM100+block-FP8 decision in one place (e.g., keep selection here for quant weights and have backend selection consult the same predicate).
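
A minimal sketch of that centralization (the helper name is hypothetical, not something introduced by this PR):

```python
def should_use_deepgemm_fp8_block_scales(module, sm_version: int) -> bool:
    """Single predicate for the SM100 + FP8-block-scales decision, meant to be
    consulted both by the quant-method choice in WideEPMoE and by
    MoEBackendSelection so the two call sites cannot drift apart."""
    return sm_version == 100 and getattr(module, "has_deepseek_fp8_block_scales", False)
```

Both call sites would then import this one helper instead of re-deriving the condition locally.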


569-571: Intentional no-op for FP8 block scales pre-quantization.

Add a short comment to clarify that input quantization is handled inside the selected backend (DeepGemm path), to avoid future regressions.

-            elif self.has_deepseek_fp8_block_scales:
-                pass
+            elif self.has_deepseek_fp8_block_scales:
+                # FP8 block scales: input quantization is performed inside the backend (e.g., DeepGemm path).
+                pass
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 62459d5 and 7527a70.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (9 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/moe_backend.py (1 hunks)
  • tests/unittest/_torch/modules/test_fused_moe.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,cc,cxx,cu,h,hpp,hh,hxx,cuh,py}: Use spaces only; no tabs; indent with 4 spaces
Prepend NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tests/unittest/_torch/modules/test_fused_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/moe_backend.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Indent Python with 4 spaces; no tabs
Preserve module namespaces when importing: from package.subpackage import foo; then call foo.SomeClass() instead of importing the class directly
Python naming: files snake_case; classes PascalCase; functions/methods snake_case; locals snake_case (prefix k_ when starting with a number); globals UPPER_SNAKE_CASE with G_ prefix; constants UPPER_SNAKE_CASE
Avoid shadowing outer-scope variables; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; limit comments to function-internal or file-local interfaces
Use Google-style docstrings for classes and functions; document attributes/variables inline so Sphinx can render them
Avoid reflection when simpler alternatives exist; prefer explicit parameters and return dicts over locals()/dynamic tricks
In try/except, catch the narrowest exceptions possible; keep try bodies minimal and use else for the main logic when doing duck-typing checks

Files:

  • tests/unittest/_torch/modules/test_fused_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/moe_backend.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
🧠 Learnings (1)
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
🧬 Code graph analysis (3)
tests/unittest/_torch/modules/test_fused_moe.py (5)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • AlltoallMethodType (30-38)
tensorrt_llm/mapping.py (1)
  • Mapping (32-513)
tensorrt_llm/quantization/utils/fp8_utils.py (1)
  • per_block_cast_to_fp8_e8m0 (54-79)
tests/unittest/_torch/helpers.py (1)
  • per_block_cast_to_fp8_e8m0 (55-68)
tensorrt_llm/models/modeling_utils.py (2)
  • QuantConfig (128-268)
  • quant_algo (547-548)
tensorrt_llm/_torch/modules/fused_moe/moe_backend.py (4)
tensorrt_llm/_torch/modules/fused_moe/interface.py (2)
  • MoE (22-181)
  • has_deepseek_fp8_block_scales (127-130)
tensorrt_llm/_torch/autotuner.py (2)
  • AutoTuner (271-752)
  • choose_one (329-436)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)
  • MoERunner (27-121)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (5)
  • deepgemm_fp8_group_blockwise_gemm (298-336)
  • masked_index_copy_group_quant_fp8 (88-159)
  • preprocess_after_permute (259-294)
  • set_strides (339-345)
  • triton_masked_index_gather (194-215)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (2)
tensorrt_llm/_torch/modules/fused_moe/moe_backend.py (5)
  • MoEBackend (17-177)
  • MoEBackendSelection (750-793)
  • select_backend (762-793)
  • run_moe (109-177)
  • run_moe (379-461)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • DeepSeekFP8BlockScalesFusedMoEMethod (604-737)
  • DeepSeekFP8BlockScalesFusedMoEMethodDeepGemm (740-781)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/modules/fused_moe/moe_backend.py

322-322: Local variable use_deepseek_fp8_block_scale is assigned to but never used

Remove assignment to unused variable use_deepseek_fp8_block_scale

(F841)


326-326: Local variable use_int8_woq_per_channel is assigned to but never used

Remove assignment to unused variable use_int8_woq_per_channel

(F841)


348-348: Line too long (136 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/_torch/modules/fused_moe/moe_backend.py (1)

217-241: weight_dtype correctly reflects storage dtype
MoERunner intentionally uses the original tensor’s dtype for weight_dtype (storage layout) and only applies weight_view_dtype when profiling via AutoTuner; the FusedMoeRunner flags (use_w4_group_scaling, use_int8_woq_per_channel, etc.) cover all quant modes.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)

661-678: Plumbing looks correct; good switch to backend.run_moe.

Arguments forwarded (weights, scales, input_sf, tuner_* and fused finalize) align with the new backend contract.

Comment on lines +157 to +158
        self.finalize_tactic(module, input, output_dtype, min_latency_mode,
                             tuner_top_k)

⚠️ Potential issue

Bug: finalize_tactic receives tuner_top_k as use_fused_finalize due to positional args.

Base run_moe passes the 5th positional argument into the 5th parameter of MoECutlassBackend.finalize_tactic, which is use_fused_finalize, not tuner_top_k. Use keywords to avoid misbinding.

-        self.finalize_tactic(module, input, output_dtype, min_latency_mode,
-                             tuner_top_k)
+        self.finalize_tactic(
+            module,
+            input,
+            output_dtype,
+            min_latency_mode=min_latency_mode,
+            tuner_top_k=tuner_top_k,
+        )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        self.finalize_tactic(module, input, output_dtype, min_latency_mode,
-                             tuner_top_k)
+        self.finalize_tactic(
+            module,
+            input,
+            output_dtype,
+            min_latency_mode=min_latency_mode,
+            tuner_top_k=tuner_top_k,
+        )
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/modules/fused_moe/moe_backend.py around lines 157-158,
the call self.finalize_tactic(module, input, output_dtype, min_latency_mode,
tuner_top_k) incorrectly binds tuner_top_k into the use_fused_finalize parameter
via positional args; change the call to use explicit keyword arguments for
clarity and correctness (e.g., pass module=..., input=..., output_dtype=...,
min_latency_mode=..., tuner_top_k=tuner_top_k and set use_fused_finalize
explicitly if needed) so tuner_top_k is passed into the correct parameter.
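
A complementary hardening option (purely a sketch, not a change proposed in this PR; the class name is a placeholder and the real signature may differ) is to make the trailing flags keyword-only so this kind of misbinding fails at the call site:

```python
from typing import Optional


class CutlassBackendSketch:
    def finalize_tactic(self, module, tuner_input, output_dtype, *,
                        min_latency_mode: bool = False,
                        use_fused_finalize: bool = True,
                        tuner_top_k: Optional[int] = None) -> None:
        # The bare `*` forces callers to name min_latency_mode, use_fused_finalize
        # and tuner_top_k explicitly, so tuner_top_k can no longer slide into
        # use_fused_finalize by position.
        ...
```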

Comment on lines +322 to +328
        use_deepseek_fp8_block_scale = getattr(module,
                                               'has_deepseek_fp8_block_scales',
                                               False)
        use_w4_group_scaling = getattr(module, 'has_w4afp8', False)
        use_int8_woq_per_channel = getattr(module, 'has_int8_woq_per_channel',
                                           False)


🛠️ Refactor suggestion

Remove unused locals flagged by Ruff F841.

use_deepseek_fp8_block_scale and use_int8_woq_per_channel are computed but never used.

-        use_deepseek_fp8_block_scale = getattr(module,
-                                               'has_deepseek_fp8_block_scales',
-                                               False)
-        use_w4_group_scaling = getattr(module, 'has_w4afp8', False)
-        use_int8_woq_per_channel = getattr(module, 'has_int8_woq_per_channel',
-                                           False)
+        use_w4_group_scaling = getattr(module, 'has_w4afp8', False)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        use_deepseek_fp8_block_scale = getattr(module,
-                                               'has_deepseek_fp8_block_scales',
-                                               False)
-        use_w4_group_scaling = getattr(module, 'has_w4afp8', False)
-        use_int8_woq_per_channel = getattr(module, 'has_int8_woq_per_channel',
-                                           False)
+        use_w4_group_scaling = getattr(module, 'has_w4afp8', False)
🧰 Tools
🪛 Ruff (0.12.2)

322-322: Local variable use_deepseek_fp8_block_scale is assigned to but never used

Remove assignment to unused variable use_deepseek_fp8_block_scale

(F841)


326-326: Local variable use_int8_woq_per_channel is assigned to but never used

Remove assignment to unused variable use_int8_woq_per_channel

(F841)

🤖 Prompt for AI Agents
In tensorrt_llm/_torch/modules/fused_moe/moe_backend.py around lines 322 to 328,
two locals (use_deepseek_fp8_block_scale and use_int8_woq_per_channel) are
computed but never used; remove the unused getattr assignments (or if they were
intended for future checks, replace them with direct getattr calls at the point
of use) so that only necessary variables remain—delete the two unused lines or
consolidate any needed flags into actual conditional logic where they are used.

Comment on lines +441 to +443
        self.finalize_tactic(module, tuner_input, output_dtype,
                             min_latency_mode, tuner_top_k)


⚠️ Potential issue

Bug: Same finalize_tactic misbinding in Cutlass.run_moe.

tuner_top_k is being passed positionally into use_fused_finalize. Pass by keyword (and wire use_fused_finalize).

-        self.finalize_tactic(module, tuner_input, output_dtype,
-                             min_latency_mode, tuner_top_k)
+        self.finalize_tactic(
+            module,
+            tuner_input,
+            output_dtype,
+            min_latency_mode=min_latency_mode,
+            use_fused_finalize=use_fused_finalize,
+            tuner_top_k=tuner_top_k,
+        )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        self.finalize_tactic(module, tuner_input, output_dtype,
-                             min_latency_mode, tuner_top_k)
+        self.finalize_tactic(
+            module,
+            tuner_input,
+            output_dtype,
+            min_latency_mode=min_latency_mode,
+            use_fused_finalize=use_fused_finalize,
+            tuner_top_k=tuner_top_k,
+        )
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/modules/fused_moe/moe_backend.py around lines 441 to 443,
the call to finalize_tactic passes tuner_top_k positionally which misbinds it
into the next parameter (use_fused_finalize); change the call to pass
tuner_top_k as a keyword (tuner_top_k=tuner_top_k) and also explicitly pass
use_fused_finalize by keyword (use_fused_finalize=use_fused_finalize) so the
values are correctly wired into finalize_tactic and onward to
use_fused_finalize.
