Update PyTorch to 2.8.0 #20358
Conversation
Summary of Changes
Hello @huydhn, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request initiates the process of upgrading the project's PyTorch dependency to version 2.8.0, targeting its release candidate builds. This involves updating not only the core PyTorch version but also ensuring compatibility across related libraries like Torchaudio and Torchvision, and adjusting build and test environments to source these new versions from PyTorch's test indices. The changes are foundational, impacting various dependency management files and build configurations.
Highlights
- PyTorch Version Update: The core PyTorch dependency has been updated from version 2.7.0 to 2.8.0 across `pyproject.toml`, `requirements/build.txt`, `requirements/cpu.txt`, `requirements/cuda.txt`, `requirements/rocm-build.txt`, and `requirements/test.in`.
- Associated Library Updates: Torchaudio and Torchvision have been updated to their 2.8.0 and 0.23.0 compatible versions, respectively, to align with the new PyTorch release.
- Test Index Configuration: Build and test configurations, including `.pre-commit-config.yaml`, `docker/Dockerfile`, and various `requirements/*.txt` files, have been adjusted to pull PyTorch and related packages from the `test` release candidate index URLs (e.g., `https://download.pytorch.org/whl/test/cu128`).
- CUDA and ROCm Dependency Alignment: Specific CUDA-related dependencies in `requirements/test.txt` (e.g., `nvidia-cublas`, `nvidia-cudnn`) have been updated to align with the PyTorch 2.8.0 release candidate, and ROCm versions in `requirements/rocm-build.txt` are updated to 6.3.
- FlashInfer and Triton Updates: The FlashInfer wheel URL in `docker/Dockerfile` has been updated to reflect PyTorch 2.8, and the Triton version in `requirements/test.txt` has been bumped from 3.3.0 to 3.4.0.
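As an illustration of the highlights above, pulling the release-candidate stack from the test index might look like this in a requirements file (a sketch only; the exact pins and index flags live in the files listed above):

```
# Illustrative requirements fragment, not copied verbatim from the PR.
--extra-index-url https://download.pytorch.org/whl/test/cu128
torch==2.8.0
torchvision==0.23.0
torchaudio==2.8.0
```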
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request updates PyTorch to version 2.8.0 and its related dependencies like torchvision, torchaudio, and Triton. The changes are mostly version bumps in requirement files and configuration files to use the PyTorch test package index. The changes look consistent with the goal of the PR. I've found one minor issue related to redundant configuration that could be improved for better maintainability.
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Huy Do <[email protected]>
Looks like we are good now? Any new blockers?

Yes, this is ready to land now if you could stamp it. There are no other blockers; we have ironed out all the known issues.
thanks for the great work!
This introduced a new deprecation warning for me. Should we remove the environment variable from vLLM, or is there still a need to support older versions of PyTorch? (See `vllm/v1/worker/gpu_worker.py`, lines 157 to 163 at 1cf3753.)
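One way to handle the question above is to gate the workaround on the installed PyTorch version, so older releases keep the old behavior and 2.8+ skips it. A minimal sketch, assuming a placeholder variable name (the real one lives in `vllm/v1/worker/gpu_worker.py`):

```python
def _parse(version: str) -> tuple:
    # Strip any local build suffix like "+cu128"; assumes a plain
    # "major.minor.patch" release string (no dev/rc suffix handling).
    return tuple(int(part) for part in version.split("+")[0].split(".")[:3])

def needs_legacy_env_var(torch_version: str) -> bool:
    """Return True if this PyTorch version predates 2.8.0 and therefore
    still needs the legacy workaround variable."""
    return _parse(torch_version) < (2, 8, 0)

def maybe_set_legacy_env_var(torch_version: str, env: dict) -> None:
    # "SOME_LEGACY_TORCH_ENV_VAR" is a placeholder name, not the actual
    # variable used in gpu_worker.py.
    if needs_legacy_env_var(torch_version):
        env.setdefault("SOME_LEGACY_TORCH_ENV_VAR", "1")
```

Once no supported PyTorch version falls below the cutoff, the whole gate (and the variable) can be deleted in one place.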
This PR is causing the CI failure in the Basic Models test: https://buildkite.com/vllm/ci/builds/28843/steps/canvas?jid=0198f5cd-d810-42bf-8560-c5ef36e6898c
So my understanding of this is:
If we want to unblock CI signals now, we could just skip the single test
Perhaps a rebase should have been done first?

Let's try to forward-fix these issues if possible. I could take a look at the Docker image build later, after trying to bisect the basic model test failure.

#23897 should fix this failure (I verified this locally), but the root cause needs to be investigated more.

PR to update the release pipeline #23960 to close the loop. It also fixes the arm64 build issue.
Hi! Are there any nightlies or a release (v0.10.1.1) built against PyTorch 2.8.0? E.g. v0.10.1.1 was out after 2.8.0 was released. Maybe there are 2.8.0 nightlies, as suggested at https://docs.vllm.ai/en/v0.5.5/getting_started/installation.html#install-with-pip?

@vadimkantorov yes, the nightlies from now on should include PyTorch 2.8
Are nightlies deleted over time? I'm still struggling to figure out a URL; the docs should provide concrete URL examples, not just templates using COMMIT and VERSION. Could you please advise what the URL is, or where to find all currently available/published nightly wheel URLs? Thanks :)

If you just want the latest nightly, follow the instructions at https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#install-the-latest-code (the docs link you provided is ancient)
I'd like to get the direct wheel link (e.g. for placing into a uv pyproject). This readme just shows a pip install command, and another readme shows a template for a link, but no concrete final example (otherwise it's hard to grasp the correct version format: does it include dev+commit or not? Commit in short or long form?).
And also, how long is such a wheel kept online?

Sorry, I don't know the answer to that question
You can use this direct URL:

```
wget https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
```

You can then verify what was installed:

```
pip list | grep vllm
pipdeptree -p vllm | grep torch
```

You can also inspect the wheel contents with `unzip vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl` (`wheel unpack` does not seem to work).
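If you want to pin that wheel in a uv-managed project, one option is uv's direct-URL source syntax; a sketch (the project name and layout here are illustrative, and nightly wheels may be removed over time, so pinning one is fragile — check the uv docs for your version):

```toml
# pyproject.toml fragment (illustrative)
[project]
name = "my-app"
version = "0.1.0"
dependencies = ["vllm"]

[tool.uv.sources]
# Direct URL to a specific nightly wheel.
vllm = { url = "https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl" }
```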
Essential Elements of an Effective PR Description Checklist
Purpose
Update vLLM to PyTorch 2.8.0 now that it has been released
Test Plan
CI
Test Result
There are some failures; I'm trying to evaluate each one to confirm that they are existing failures from main.

- `TP_SIZE=1 DP_SIZE=2 pytest -v -s v1/test_async_llm_dp.py` is passing on my local H100
- `TP_SIZE=2 DP_SIZE=2 pytest -v -s v1/test_async_llm_dp.py` is also passing locally
- `mamba_ssm` package (probably after the recent 2.7.1 update), for example https://buildkite.com/vllm/ci/builds/26325#019887d6-fd23-447d-8e2c-067a04a33021/200-3499

The benchmark results look normal.