
Conversation

@dsikka (Contributor) commented Jul 22, 2025

Purpose

  • Support the Speculators config format
  • Add support for loading models saved/converted using the Speculators repository
  • Add support for serving Eagle3 Llama models trained and saved in the Speculators format

Summary of Changes

  • Introduces a SpeculatorsConfig to load models saved with the speculators format
  • Updates the llama_eagle3 definition to optionally support norm_before_residual, a field saved by Eagle3 speculators (see the sketch after this list)
  • Allows loading/serving the speculator without requiring a speculative_config as input. To achieve this, we optionally update the model depending on whether the runner is a draft runner, and fill in the speculative_config details from the speculators config
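
To make the norm_before_residual change concrete, here is a minimal sketch of what such a toggle typically controls in a Llama-style decoder block. This is illustrative only and is not the actual llama_eagle3 code; LayerNorm stands in for RMSNorm, and a single Linear stands in for the attention/MLP sublayers.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Toy decoder block showing where a norm_before_residual flag changes the dataflow."""

    def __init__(self, hidden_size: int, norm_before_residual: bool = False):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)              # stand-in for RMSNorm
        self.mixer = nn.Linear(hidden_size, hidden_size)   # stand-in for attention/MLP
        self.norm_before_residual = norm_before_residual

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.norm_before_residual:
            # Normalize first and carry the *normalized* activations on the residual path.
            h = self.norm(x)
            return h + self.mixer(h)
        # Default: carry the un-normalized input on the residual path (pre-norm style).
        return x + self.mixer(self.norm(x))
```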

Speculators Config:

API

  • You can load / run speculators through the LLM Engine as well as through vllm serve
  • You only need to provide a model path to your speculator - the target model will be read from the config
  • You can still load models through the speculative_config pathway - this allows you to override any arguments in your config (such as num_speculative_tokens)
  • Note: you cannot currently override the target model - it will be read from the config

E.g.

  • Serve a speculator directly (the target model is read from its config):
VLLM_USE_V1=1 vllm serve "nm-testing/SpeculatorLlama3-1-8B-Eagle3-converted-0717"
  • Override the number of speculative tokens from 3 to 5 via --speculative_config:
VLLM_USE_V1=1 vllm serve RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8-dynamic --speculative_config '{"model":"nm-testing/SpeculatorLlama3-1-8B-Eagle3-converted-0717", "num_speculative_tokens": 5, "method": "eagle3"}'
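
For offline inference, the same two modes should work through the Python LLM entrypoint as well. A minimal sketch, assuming the LLM constructor accepts the same speculative_config dict as the CLI flag (and that VLLM_USE_V1=1 is set in the environment, as in the commands above):

```python
from vllm import LLM, SamplingParams

# Load the speculator directly; the target model is read from its config.
llm = LLM(model="nm-testing/SpeculatorLlama3-1-8B-Eagle3-converted-0717")

# Alternatively, point at the target and override fields such as num_speculative_tokens.
# llm = LLM(
#     model="RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8-dynamic",
#     speculative_config={
#         "model": "nm-testing/SpeculatorLlama3-1-8B-Eagle3-converted-0717",
#         "num_speculative_tokens": 5,
#         "method": "eagle3",
#     },
# )

outputs = llm.generate(["The capital of France is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```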

Test Plan

  • Tested target models
  1. RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8-dynamic
  2. meta-llama/Meta-Llama-3.1-8B-Instruct

Test Result - GuideLLM Benchmarking:

vLLM:

VLLM_USE_V1=1 vllm serve nm-testing/SpeculatorLlama3-1-8B-Eagle3-converted-0717 --port 7600 > output_speculators_llama.txt

Per-position acceptance rate (dense target):
[0.812 0.618 0.446]
Conditional acceptance rate:
[0.812 0.761 0.722]

Per-position acceptance rate (FP8 quantized target):
[0.8   0.604 0.436]
Conditional acceptance rate:
[0.8   0.755 0.723]
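
As a reading aid (not part of the original benchmark output): the conditional rows are consistent with dividing each per-position rate by the previous one, i.e. the chance a draft token is accepted given that the previous position was accepted. A quick check, matching the reported values up to rounding:

```python
def conditional(per_position):
    # conditional[i] = per_position[i] / per_position[i-1], with position 0 unchanged
    out, prev = [], 1.0
    for r in per_position:
        out.append(round(r / prev, 3))
        prev = r
    return out

print(conditional([0.812, 0.618, 0.446]))  # [0.812, 0.761, 0.722]
print(conditional([0.8, 0.604, 0.436]))    # [0.8, 0.755, 0.722] (reported: 0.723)
```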

Follow-up

  • Qwen support
  • Eagle support

Signed-off-by: Dipika Sikka <[email protected]>
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces support for Speculators Config. There are a few critical issues that need to be addressed:

  1. There's a consistent misuse of self.config_dict instead of self.config in vllm/transformers_utils/configs/speculators/base.py, which will cause runtime errors.
  2. A typo in a method name update_defualts in vllm/transformers_utils/configs/speculators/eagle.py will prevent it from being called, breaking the configuration logic for Eagle-1 models.

@dsikka dsikka changed the title [Speculative Decoding] Add Speculators Config Support [Speculative Decoding] Add speculators Config Support Jul 22, 2025
@robertgshaw2-redhat (Collaborator) commented:

👀


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mgoin mgoin self-requested a review July 22, 2025 02:56
dsikka added 3 commits July 23, 2025 01:02
Signed-off-by: Dipika Sikka <[email protected]>
Signed-off-by: Dipika Sikka <[email protected]>
@mergify mergify bot added the llama (Related to Llama models), speculative-decoding, and v1 labels Jul 25, 2025
dsikka added 4 commits July 25, 2025 17:23
Signed-off-by: Dipika Sikka <[email protected]>
Signed-off-by: Dipika Sikka <[email protected]>
Signed-off-by: Dipika Sikka <[email protected]>
@dsikka dsikka changed the title [Speculative Decoding] Add speculators Config Support [Speculative Decoding] Add speculators config support Jul 26, 2025
Signed-off-by: Dipika Sikka <[email protected]>
@dsikka dsikka marked this pull request as ready for review July 26, 2025 00:16
dsikka added 2 commits July 26, 2025 00:18
Signed-off-by: Dipika Sikka <[email protected]>
Signed-off-by: Dipika Sikka <[email protected]>
Signed-off-by: Dipika Sikka <[email protected]>
@aarnphm aarnphm (Collaborator) left a comment

First round


mergify bot commented Jul 31, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @dsikka.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 31, 2025
@aarnphm aarnphm (Collaborator) left a comment

One tiny comment, otherwise LGTM

@mergify mergify bot removed the needs-rebase label Jul 31, 2025
Signed-off-by: Dipika Sikka <[email protected]>
@mgoin mgoin (Member) left a comment

Nice work!

@mgoin mgoin merged commit dfbc1f8 into vllm-project:main Aug 1, 2025
44 checks passed
wenscarl pushed a commit to wenscarl/vllm that referenced this pull request Aug 4, 2025
wenscarl pushed a commit to wenscarl/vllm that referenced this pull request Aug 4, 2025
juuice-lee pushed a commit to juuice-lee/vllm-moe.code that referenced this pull request Aug 5, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
jingyu-ml pushed a commit to jingyu-ml/vllm that referenced this pull request Aug 8, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
taneem-ibrahim pushed a commit to taneem-ibrahim/vllm that referenced this pull request Aug 14, 2025
BoyuanFeng pushed a commit to BoyuanFeng/vllm that referenced this pull request Aug 14, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025