Conversation


@wenscarl wenscarl commented Jul 21, 2025

co-authored by: @xinli-git
This partially patches #22073 to fix DSR1 weight loading.

export VLLM_USE_FLASHINFER_MOE_FP4=1
export VLLM_FLASHINFER_MOE_BACKEND="latency"
lm_eval --model vllm --model_args pretrained=nvidia/Llama-4-Scout-17B-16E-Instruct-FP4,tensor_parallel_size=4,max_model_len=2048,enforce_eager=True,kv_cache_dtype=auto --gen_kwargs temperature=0.0 --limit 500 --trust_remote_code --tasks gsm8k --num_fewshot 5 --batch_size 200
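
For context, the two environment variables above gate which FP4 MoE kernel path is used. The sketch below is illustrative only (the function and return names are not vLLM's actual API); it assumes `VLLM_USE_FLASHINFER_MOE_FP4=1` enables the FlashInfer FP4 MoE path and `VLLM_FLASHINFER_MOE_BACKEND` chooses between the low-latency trtllm-gen kernels and the CUTLASS kernels:

```python
import os

def pick_fp4_moe_backend(env=os.environ):
    """Illustrative sketch (not vLLM's real code) of how the two flags
    could select an FP4 MoE kernel path."""
    if env.get("VLLM_USE_FLASHINFER_MOE_FP4") != "1":
        return "default"            # FlashInfer FP4 MoE disabled
    if env.get("VLLM_FLASHINFER_MOE_BACKEND") == "latency":
        return "trtllm-gen"         # low-latency TensorRT-LLM generated kernels
    return "flashinfer-cutlass"     # CUTLASS-based FlashInfer kernels

print(pick_fp4_moe_backend({"VLLM_USE_FLASHINFER_MOE_FP4": "1",
                            "VLLM_FLASHINFER_MOE_BACKEND": "latency"}))
```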

Llama4:

vllm (pretrained=nvidia/Llama-4-Scout-17B-16E-Instruct-FP4,quantization=modelopt_fp4,tensor_parallel_size=1,max_model_len=2048,enforce_eager=True,kv_cache_dtype=auto,trust_remote_code=True), gen_kwargs: (temperature=0.0), limit: 500.0, num_fewshot: 5, batch_size: 200
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.896|±  |0.0137|
|     |       |strict-match    |     5|exact_match|↑  |0.888|±  |0.0141|

DSR1-FP4:

INFO:lm_eval.loggers.evaluation_tracker:Output path not provided, skipping saving results aggregated
vllm (pretrained=nvidia/DeepSeek-R1-FP4,quantization=modelopt_fp4,tensor_parallel_size=4,enforce_eager=True,max_model_len=2048,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9431|±  |0.0064|
|     |       |strict-match    |     5|exact_match|↑  |0.9424|±  |0.0064|

Perf:

With TP=8 on B200x8, 1k/8k input/output lengths:
max-num-seq=4, num-req=8:
trtllm-gen:
Throughput: 0.12 requests/s, 362.62 total tokens/s, 241.74 output tokens/s
flashinfer cutlass:
Throughput: 0.09 requests/s, 285.77 total tokens/s, 190.51 output tokens/s
max-num-seq=8, num-req=16,
trtllm-gen:
Throughput: 0.04 requests/s, 371.03 total tokens/s, 329.80 output tokens/s
flashinfer cutlass:
Throughput: 0.04 requests/s, 358.38 total tokens/s, 318.56 output tokens/s
max-num-seq=16, num-req=32,
trtllm-gen:
Throughput: 0.09 requests/s, 845.02 total tokens/s, 751.13 output tokens/s
flashinfer cutlass:
Throughput: 0.08 requests/s, 709.25 total tokens/s, 630.45 output tokens/s



👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀


mergify bot commented Jul 21, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @wenscarl.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 21, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for TensorRT-LLM and FlashInfer CUTLASS FP4 MoE kernels, which is a great step for improving performance. The changes include a significant and well-structured refactoring of the MoE framework to be more modular.

However, I've found a few critical issues, primarily in the new TensorRT-LLM integration path. The implementation in flashinfer_trtllm_moe.py seems incomplete and will cause runtime errors due to undefined variables and unreachable code. Similarly, the invocation of this kernel in modelopt.py is missing arguments. These issues must be addressed before this PR can be merged.

mergify bot commented Jul 24, 2025 with the same merge-conflict notice, again asking @wenscarl to rebase.

@mergify mergify bot added the needs-rebase and deepseek (Related to DeepSeek models) labels Jul 24, 2025
@mergify mergify bot added the ci/build label and removed the needs-rebase label Jul 28, 2025
@wenscarl wenscarl force-pushed the trtllm-fp4 branch 3 times, most recently from f01e9d7 to b25d113 Compare July 28, 2025 18:55
@mergify mergify bot added the llama (Related to Llama models) label Jul 29, 2025
mergify bot commented Jul 30, 2025 with the same merge-conflict notice.

Collaborator

@andoorve andoorve left a comment


Took a preliminary look. I have limited context on this, but in general it looks good. The logic is fairly complex in some files (modelopt.py), and the MoE code is getting unwieldy overall. It would be good to look for refactoring and simplification opportunities where possible, and to add some tests for coverage. There are also some places with missing return-value type annotations.

mergify bot commented Aug 1, 2025 with the same merge-conflict notice.

wenscarl and others added 5 commits August 7, 2025 17:26
Signed-off-by: Shu Wang <[email protected]>
Signed-off-by: Po-Han Huang <[email protected]>
Signed-off-by: Shu Wang. <[email protected]>
Signed-off-by: Po-Han Huang <[email protected]>
Signed-off-by: Shu Wang. <[email protected]>
Signed-off-by: Po-Han Huang <[email protected]>
Signed-off-by: Xin Li. <[email protected]>

Signed-off-by: XIn Li <[email protected]>
Signed-off-by: Po-Han Huang <[email protected]>
auto-merge was automatically disabled August 8, 2025 00:27

Head branch was pushed to by a user without write access


nvpohanh commented Aug 8, 2025

rebased again...

@vllm-bot vllm-bot merged commit a3b9c17 into vllm-project:main Aug 8, 2025
33 of 48 checks passed
ywang96 pushed a commit to ywang96/vllm that referenced this pull request Aug 8, 2025
ywang96 pushed a commit to ywang96/vllm that referenced this pull request Aug 8, 2025
jingyu-ml pushed a commit to jingyu-ml/vllm that referenced this pull request Aug 8, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
yyihuang pushed a commit to yyihuang/vllm that referenced this pull request Aug 11, 2025
aarnphm pushed a commit to aarnphm/vllm that referenced this pull request Aug 13, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
taneem-ibrahim pushed a commit to taneem-ibrahim/vllm that referenced this pull request Aug 14, 2025
BoyuanFeng pushed a commit to BoyuanFeng/vllm that referenced this pull request Aug 14, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
juuice-lee pushed a commit to juuice-lee/vllm-moe.code that referenced this pull request Aug 18, 2025
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
dumb0002 pushed a commit to dumb0002/vllm that referenced this pull request Aug 28, 2025
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025
Labels: deepseek (Related to DeepSeek models), llama (Related to Llama models), quantization, ready (ONLY add when PR is ready to merge/full CI is needed)
Projects: Status: Done
9 participants