[Multimodal][Speculative Decoding] Eagle Eagle3 mm support, enablement on qwen2.5vl #22872

Open · wants to merge 3 commits into main

Conversation

@david6666666 (Contributor) commented Aug 14, 2025

Purpose

Follow-up to #20788: add Eagle/Eagle3 multimodal support, enabled on Qwen2.5-VL.
The draft model is https://huggingface.co/Rayzl/qwen2.5-vl-7b-eagle3-sgl
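
For context, the same Eagle3 draft can also be exercised through vLLM's offline Python API instead of vllm serve. A minimal sketch, assuming the checkpoint paths from the test plan below and a vLLM build that accepts a speculative_config dict:

from vllm import LLM, SamplingParams

# Target model plus Eagle3 draft, mirroring the serve command in the test
# plan below; paths are placeholders from this PR and should match your setup.
llm = LLM(
    model="/workspace/models/Qwen2.5-VL-7B-Instruct",
    dtype="bfloat16",
    max_model_len=32768,
    speculative_config={
        "method": "eagle3",
        "model": "/workspace/models/qwen2.5-vl-7b-eagle3-sgl",
        "num_speculative_tokens": 3,
    },
)

# Smoke test: a text-only prompt still exercises the speculative path.
outputs = llm.generate(
    ["Describe speculative decoding in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)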

Test Plan

vllm serve \
    /workspace/models/Qwen2.5-VL-7B-Instruct \
    --port 5580 --host 0.0.0.0 \
    --max-num-seqs 128 --dtype bfloat16 --max-model-len=32768  \
    --no-enable-prefix-caching --trust-remote-code -tp 4 \
    --speculative-config '{"method": "eagle3", "model": "/workspace/models/qwen2.5-vl-7b-eagle3-sgl", "prefill_token_shift": false, "num_speculative_tokens": 3, "draft_tensor_parallel_size": 4, "max_model_len": 8192}' \
    --num-lookahead-slots=3 \
    --allowed-local-media-path /workspace/l00807937/ \
    --gpu-memory-utilization=0.93
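
Once the server is up, a quick sanity check before benchmarking: a minimal sketch using the OpenAI Python client, assuming the server above is reachable on localhost:5580; the image URL is just a public example, any reachable image works.

from openai import OpenAI

# Point the client at the vllm serve endpoint launched above.
client = OpenAI(base_url="http://localhost:5580/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="/workspace/models/Qwen2.5-VL-7B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            # Example public image; swap in any URL (or a file under
            # --allowed-local-media-path) you want to test with.
            {"type": "image_url", "image_url": {"url": "https://upload.wikimedia.org/wikipedia/commons/d/d5/Half_Dome_with_Eastern_Yosemite_Valley.jpg"}},
        ],
    }],
    max_tokens=64,
)
print(resp.choices[0].message.content)

Then run the serving benchmark:
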
python /workspace/eagle_mm/vllm-LJH/benchmarks/benchmark_serving.py \
  --backend openai-chat \
  --model /workspace/models/Qwen2.5-VL-7B-Instruct \
  --port 5580 \
  --host 127.0.0.1 \
  --endpoint /v1/chat/completions \
  --dataset-name hf \
  --dataset-path /lmarena-ai/VisionArena-Chat \
  --hf-split train \
  --seed 40 \
  --save-result \
  --save-detailed \
  --result-dir $LOG_PATH \
  --result-filename vision_arena_outputs_$(date +"%Y%m%d_%H%M%S").json \
  2>&1 | tee $LOG_PATH/benchmark_VisionArena_${NUM_PROMPTS}reqs_$(date +"%Y%m%d_%H%M%S").log

Test Result

# eagle3
============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  53.18     
Total input tokens:                      92971     
Total generated tokens:                  3366      
Request throughput (req/s):              18.81     
Output token throughput (tok/s):         63.30     
Total Token throughput (tok/s):          1811.65   
---------------Time to First Token----------------
Mean TTFT (ms):                          966.79    
Median TTFT (ms):                        0.00      
P99 TTFT (ms):                           12999.41  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          3085.82   
Median TPOT (ms):                        1315.73   
P99 TPOT (ms):                           39158.27  
---------------Inter-token Latency----------------
Mean ITL (ms):                           739.84    
Median ITL (ms):                         647.41    
P99 ITL (ms):                            1607.11   
==================================================
# without eagle3
============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  90.38     
Total input tokens:                      92971     
Total generated tokens:                  112893    
Request throughput (req/s):              11.06     
Output token throughput (tok/s):         1249.05   
Total Token throughput (tok/s):          2277.68   
---------------Time to First Token----------------
Mean TTFT (ms):                          43495.63  
Median TTFT (ms):                        41275.69  
P99 TTFT (ms):                           87428.42  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          89.87     
Median TPOT (ms):                        92.47     
P99 TPOT (ms):                           173.62    
---------------Inter-token Latency----------------
Mean ITL (ms):                           129.99    
Median ITL (ms):                         94.06     
P99 ITL (ms):                            1487.70   
==================================================
# pytest
# command
pytest -s ./vllm-LJH/tests/v1/e2e/test_spec_decode.py -k qwen2.5_vl_eagle3
# result
tests/v1/e2e/test_spec_decode.py::test_eagle_correctness[FLASH_ATTN_VLLM_V1-qwen2.5_vl_eagle3]
================ 1 passed, 2 skipped, 16 deselected, 5 warnings in 75.59s (0:01:15) =================
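
For reference, test_eagle_correctness unpacks each case as method, model_name, spec_model_name, tp_size (see the diff further down), so the new qwen2.5_vl_eagle3 case plausibly takes a shape like the hypothetical sketch below; the exact ids and paths live in test_spec_decode.py:

# Hypothetical parametrization tuple, inferred from the unpacking shown in
# the review diff below; the actual values in the PR may differ.
qwen25_vl_eagle3_setup = (
    "eagle3",                          # method
    "Qwen/Qwen2.5-VL-7B-Instruct",     # target model_name
    "Rayzl/qwen2.5-vl-7b-eagle3-sgl",  # spec_model_name (draft)
    1,                                 # tp_size
)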

(Optional) Documentation Update



@gemini-code-assist (bot) left a comment

Code Review

This pull request adds support for Eagle and Eagle3 speculative decoding for the Qwen2.5-VL multimodal model. The changes add new model files for the Eagle and Eagle3 variants, update the model registries, and modify tests. I found one issue: a buggy condition in the Eagle model's weight-loading logic that could lead to incorrect behavior. My feedback addresses this.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@@ -183,6 +191,9 @@ def test_eagle_correctness(

    method, model_name, spec_model_name, tp_size = model_setup

    if "Qwen2.5-VL" in model_name and attn_backend == "TREE_ATTN":
        pytest.skip("TREE ATTN not support Qwen2.5-VL Model yet")
    print(f"model_setup={model_setup}")
A reviewer (Member) suggested removing the debug print:
Suggested change:
-    print(f"model_setup={model_setup}")

@DarkLight1337 (Member) left a comment

cc @22quinn @morgendave can you help review?

@LJH-LBJ force-pushed the Eagle-mulitmodal-support-Qwen2.5vl branch from af49ffc to 9d06a8d on August 14, 2025 05:11
@LJH-LBJ force-pushed the Eagle-mulitmodal-support-Qwen2.5vl branch from e0fd906 to 8874e16 on August 14, 2025 12:32
@22quinn (Collaborator) commented Aug 16, 2025

> cc @22quinn @morgendave can you help review?

cc spec decode experts @zixi-qi @charlotte12l

mergify (bot) commented Aug 20, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @david6666666.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify (bot) added the needs-rebase label on Aug 20, 2025
@LJH-LBJ force-pushed the Eagle-mulitmodal-support-Qwen2.5vl branch from 828fc1e to f8af6b8 on August 21, 2025 11:27
Signed-off-by: Junhong <[email protected]>
Labels: needs-rebase, new-model (Requests to new models), qwen (Related to Qwen models), speculative-decoding, v1