
Conversation

huydhn
Contributor

@huydhn huydhn commented Jul 2, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as the command(s) used to run the tests.
  • The test results, such as a before/after comparison or e2e results.

Purpose

Update vLLM to PyTorch 2.8.0 now that it has been released.

Test Plan

CI

Test Result

There are some failures, I'm trying to evaluate each one to confirm that they are existing failures from main.

The benchmark results look normal.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @huydhn, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request initiates the process of upgrading the project's PyTorch dependency to version 2.8.0, targeting its release candidate builds. This involves updating not only the core PyTorch version but also ensuring compatibility across related libraries like Torchaudio and Torchvision, and adjusting build and test environments to source these new versions from PyTorch's test indices. The changes are foundational, impacting various dependency management files and build configurations.

Highlights

  • PyTorch Version Update: The core PyTorch dependency has been updated from version 2.7.0 to 2.8.0 across pyproject.toml, requirements/build.txt, requirements/cpu.txt, requirements/cuda.txt, requirements/rocm-build.txt, and requirements/test.in.
  • Associated Library Updates: Torchaudio and Torchvision have been updated to their 2.8.0 and 0.23.0 compatible versions, respectively, to align with the new PyTorch release.
  • Test Index Configuration: Build and test configurations, including .pre-commit-config.yaml, docker/Dockerfile, and various requirements/*.txt files, have been adjusted to pull PyTorch and related packages from the test release candidate index URLs (e.g., https://download.pytorch.org/whl/test/cu128).
  • CUDA and ROCm Dependency Alignment: Specific CUDA-related dependencies in requirements/test.txt (e.g., nvidia-cublas, nvidia-cudnn) have been updated to align with the PyTorch 2.8.0 release candidate, and ROCm versions in requirements/rocm-build.txt are updated to 6.3.
  • FlashInfer and Triton Updates: The FlashInfer wheel URL in docker/Dockerfile has been updated to reflect PyTorch 2.8, and the Triton version in requirements/test.txt has been bumped from 3.3.0 to 3.4.0.
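Concretely, the highlights above amount to pinning the new versions and pointing pip at the release-candidate index; a minimal sketch of the resulting requirements fragment (the exact index lines differ per file and CUDA variant):

```
--extra-index-url https://download.pytorch.org/whl/test/cu128
torch==2.8.0
torchaudio==2.8.0
torchvision==0.23.0
```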
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • /gemini review: performs a code review of the pull request in its current state.
  • /gemini summary: provides a summary of the pull request in its current state.
  • @gemini-code-assist: responds in comments when explicitly tagged, in both issue comments and review comments.
  • /gemini help: displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify mergify bot added the ci/build and rocm labels Jul 2, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates PyTorch to version 2.8.0 and its related dependencies like torchvision, torchaudio, and Triton. The changes are mostly version bumps in requirement files and configuration files to use the PyTorch test package index. The changes look consistent with the goal of the PR. I've found one minor issue related to redundant configuration that could be improved for better maintainability.


github-actions bot commented Jul 2, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, a small and essential subset of tests that quickly catches errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation, deepseek, frontend, llama, multi-modality, new-model, performance, qwen, and structured-output labels Jul 8, 2025
@mergify mergify bot added the speculative-decoding, v1, and tpu labels Jul 8, 2025

mergify bot commented Jul 8, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @huydhn.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot removed the tpu and needs-rebase labels Jul 8, 2025
@youkaichao
Member

Looks like we are good now? Any new blockers?

@huydhn
Contributor Author

huydhn commented Aug 29, 2025

Looks like we are good now? Any new blockers?

Yes, this is ready to land now if you could stamp it. There is no other blocker. We have ironed out all the known issues.

Member

@youkaichao youkaichao left a comment


thanks for the great work!

@youkaichao youkaichao merged commit 67c1490 into vllm-project:main Aug 29, 2025
70 checks passed
@lgeiger
Contributor

lgeiger commented Aug 29, 2025

This introduced a new deprecation warning for me:

Warning: TORCH_NCCL_AVOID_RECORD_STREAMS is the default now, this environment variable is thus deprecated. (function operator())

Should we remove the environment variable from vllm or is there still a need to support older versions of PyTorch?

# torch.distributed.all_reduce does not free the input tensor until
# the synchronization point. This causes the memory usage to grow
# as the number of all_reduce calls increases. This env var disables
# this behavior.
# Related issue:
# https://discuss.pytorch.org/t/cuda-allocation-lifetime-for-inputs-to-distributed-all-reduce/191573
os.environ["TORCH_NCCL_AVOID_RECORD_STREAMS"] = "1"
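One way to keep the workaround for older PyTorch while avoiding the deprecation warning on 2.8+ is to gate it on the torch version. A minimal sketch of that idea; the helper name and its version-string parameter are hypothetical (torch itself is deliberately not imported here):

```python
import os

def maybe_avoid_record_streams(torch_version: str, env=os.environ) -> None:
    # Hypothetical helper: apply the all_reduce memory workaround only on
    # PyTorch < 2.8, where TORCH_NCCL_AVOID_RECORD_STREAMS is not yet the
    # default and setting it does not emit a deprecation warning.
    # Strip any local-version suffix like "+cu126" before parsing.
    major, minor = (int(p) for p in torch_version.split("+")[0].split(".")[:2])
    if (major, minor) < (2, 8):
        env["TORCH_NCCL_AVOID_RECORD_STREAMS"] = "1"
```

On 2.7.x the env var is set as before; on 2.8.0 and later the call is a no-op.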

@DarkLight1337
Member

This PR is causing the CI failure in Basic Models test: https://buildkite.com/vllm/ci/builds/28843/steps/canvas?jid=0198f5cd-d810-42bf-8560-c5ef36e6898c

@zou3519
Collaborator

zou3519 commented Aug 29, 2025

This PR is causing the CI failure in Basic Models test: https://buildkite.com/vllm/ci/builds/28843/steps/canvas?jid=0198f5cd-d810-42bf-8560-c5ef36e6898c

So my understanding of this is:

  1. In PyTorch 2.8, this turns a case that was never actually supported into a loud error (that is, it now crashes if it runs for too long).
  2. Something else in vLLM changed recently to trigger this failure. I'm digging through the commit history now.

If we want to unblock CI signals now, we could just skip the single test.
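A hedged sketch of what skipping the single test could look like with pytest; the test name and reason string here are illustrative, not vLLM's actual identifiers:

```python
import pytest

# Hypothetical stop-gap: mark the one failing Basic Models test as skipped
# so CI stays green while the PyTorch 2.8 regression is investigated.
@pytest.mark.skip(reason="fails under PyTorch 2.8; root cause under investigation")
def test_basic_model():
    raise AssertionError("should not run while skipped")
```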

@nWEIdia

nWEIdia commented Aug 29, 2025

Perhaps a rebase should have been done first?
@seemethere enabled the aarch64 docker image build recently; currently this PR also breaks the aarch64 docker image build.
Please see: https://buildkite.com/vllm/release/builds/7768#0198f57a-b3e2-4dce-a2df-83c69a71813b

@huydhn
Contributor Author

huydhn commented Aug 29, 2025

Let's try to forward-fix these issues if possible. I can take a look at the Docker image build later, after trying to bisect the basic model test failure.

@zou3519
Collaborator

zou3519 commented Aug 29, 2025

This PR is causing the CI failure in Basic Models test: https://buildkite.com/vllm/ci/builds/28843/steps/canvas?jid=0198f5cd-d810-42bf-8560-c5ef36e6898c

#23897 should fix this failure (I verified this locally), but the root cause needs more investigation.

@huydhn
Contributor Author

huydhn commented Aug 29, 2025

PR #23960 updates the release pipeline to close the loop. It also fixes the arm64 build issue.

@vadimkantorov

Hi! Are there any nightlies or a v0.10.1.1 release built against PyTorch 2.8.0?

E.g., v0.10.1.1 came out after 2.8.0 was released.

Or are there 2.8.0 nightlies, as suggested at https://docs.vllm.ai/en/v0.5.5/getting_started/installation.html#install-with-pip?

@hmellor
Member

hmellor commented Aug 30, 2025

@vadimkantorov yes, the nightlies from now on should include PyTorch 2.8

@vadimkantorov

Are nightlies deleted over time? I'm still struggling to figure out a URL; the docs should provide concrete URL examples, not just templates using COMMIT and VERSION.

Could you please advise what the URL is, or where to find all currently available/published nightly wheel URLs?

Thanks :)

@hmellor
Member

hmellor commented Aug 30, 2025

If you just want the latest nightly, follow the instructions at https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#install-the-latest-code (the docs link you provided is ancient)

@vadimkantorov

vadimkantorov commented Aug 30, 2025

I'd like to get the direct wheel link (e.g., for placing into a uv pyproject). This readme just shows a pip install command, and another readme shows a URL template, but there's no concrete final example, so it's hard to grasp the correct version format: does it include dev+commit or not? Is the commit in short or long form?


export VLLM_VERSION=0.5.4 # vLLM's main branch version is currently set to latest released tag
pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-${VLLM_VERSION}-cp38-abi3-manylinux1_x86_64.whl
# You can also access a specific commit
# export VLLM_COMMIT=...
# pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-${VLLM_VERSION}-cp38-abi3-manylinux1_x86_64.whl

And also: how long is such a wheel kept online?

@hmellor
Member

hmellor commented Aug 30, 2025

Sorry, I don't know the answer to that question

@nWEIdia

nWEIdia commented Aug 30, 2025

I'm still struggling to figure out a URL

You can use this direct URL:

wget https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
pip uninstall torch
pip install vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
pip install pipdeptree
pipdeptree -p vllm
pip list |grep vllm

wei:~$ pip list |grep vllm
vllm 0.10.1rc2.dev397+g038e9be4e (this commit refers to: 038e9be)

pipdeptree -p vllm |grep torch
├── torch [required: ==2.8.0, installed: 2.8.0]

You can also unzip vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl (wheel unpack does not seem to work); you should then see vllm-0.10.1rc2.dev397+g038e9be4e.dist-info, whose METADATA file contains: Requires-Dist: torch==2.8.0

Labels
ci/build, deepseek, documentation, frontend, llama, multi-modality, new-model, performance, qwen, ready, rocm, speculative-decoding, structured-output, tool-calling, v1
Projects
Status: Done