Conversation
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Signed-off-by: Kunshang Ji <jikunshang95@gmail.com>
Pull request overview
Updates the per-commit wheel build workflow to run inside PyTorch’s manylinux2_28 XPU builder container, aiming to produce wheels in an environment closer to PyTorch’s official manywheel build setup.
Changes:
- Switch the wheel build job container image to `pytorch/manylinux2_28-builder:xpu-main`.
- Source the Intel oneAPI environment before building, and install build requirements with `pip`.
- Post-process the produced wheel filename to use a `manylinux_2_28_x86_64` platform tag.
```diff
       github.event.workflow_run.event == 'push')
     container:
-      image: localhost:5000/xpu-kernel-ci-image:latest
+      image: pytorch/manylinux2_28-builder:xpu-main
```
The workflow uses a moving container tag (pytorch/manylinux2_28-builder:xpu-main). This makes builds non-reproducible and can break unexpectedly when the image changes. Consider pinning the image by digest (or a versioned tag) so the wheel build environment is stable over time.
Suggested change:

```diff
-      image: pytorch/manylinux2_28-builder:xpu-main
+      image: pytorch/manylinux2_28-builder@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
```
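For reference, a digest-pinned image reference is just `name@sha256:<digest>`. The sketch below builds one in plain shell; the digest value is a placeholder (as in the suggestion), and in practice the real digest of the current `xpu-main` image would be resolved once, e.g. with `docker buildx imagetools inspect pytorch/manylinux2_28-builder:xpu-main`:

```shell
# Assemble a digest-pinned image reference. DIGEST is a dummy value
# standing in for the digest resolved from the registry.
IMAGE="pytorch/manylinux2_28-builder"
DIGEST="sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
PINNED="${IMAGE}@${DIGEST}"
# This is the value that would go into the workflow's `image:` field.
echo "image: ${PINNED}"
```

Unlike a tag, a digest reference can never silently start pointing at different image contents, which is what makes the build environment reproducible.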
```diff
       python3 setup.py bdist_wheel --dist-dir=dist --py-limited-api=cp38
+      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/python3.11/site-packages/torch/lib/
```
LD_LIBRARY_PATH is exported after setup.py bdist_wheel runs, so it won’t affect compilation/linking during the wheel build. If this path is required for building (e.g., to find Torch shared libs), export it before invoking python3 setup.py ...; otherwise this line is dead code and should be removed to avoid confusion.
Suggested change:

```diff
-      python3 setup.py bdist_wheel --dist-dir=dist --py-limited-api=cp38
-      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/python3.11/site-packages/torch/lib/
+      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/python3.11/site-packages/torch/lib/
+      python3 setup.py bdist_wheel --dist-dir=dist --py-limited-api=cp38
```
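The ordering matters because an exported variable only affects processes launched after the export. A minimal illustration (the torch lib path is the one from the diff; the child shell stands in for the compiler/linker that `setup.py` would spawn):

```shell
# Export first: child processes inherit the environment as it exists
# at the moment they are spawned, not retroactively.
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/opt/conda/lib/python3.11/site-packages/torch/lib/"

# A child process (stand-in for the build's compiler/linker) now sees
# the extended library search path.
CHILD_VIEW=$(sh -c 'echo "$LD_LIBRARY_PATH"')
echo "$CHILD_VIEW"
```

Had the export come after the build command, the build's child processes would have seen the old, unextended value.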
```diff
       python3 setup.py bdist_wheel --dist-dir=dist --py-limited-api=cp38
       export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/python3.11/site-packages/torch/lib/
+      for f in dist/*-linux_x86_64.whl; do mv "$f" "${f/linux_x86_64/manylinux_2_28_x86_64}"; done
```
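For reference, `${f/linux_x86_64/manylinux_2_28_x86_64}` is bash's first-match substitution on the variable's value, so the loop only rewrites the platform tag in the filename. A self-contained sketch (the wheel name below is hypothetical, and the snippet assumes bash):

```shell
# Create a dummy wheel file with a hypothetical name to show what the
# rename loop does to the filename. Note this changes only the file
# name, not any metadata or libraries inside the wheel.
mkdir -p /tmp/dist-demo
touch /tmp/dist-demo/vllm_xpu_kernels-0.1-cp38-abi3-linux_x86_64.whl
for f in /tmp/dist-demo/*-linux_x86_64.whl; do
  mv "$f" "${f/linux_x86_64/manylinux_2_28_x86_64}"
done
ls /tmp/dist-demo
```

Because only the filename changes, the wheel's actual ABI requirements are untouched, which is exactly the concern the review raises about mislabeling.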
Renaming the wheel filename from linux_x86_64 to manylinux_2_28_x86_64 can mislabel the wheel without actually ensuring manylinux compliance (e.g., external shared library dependencies). Instead of mv, use a proper manylinux repair step (typically auditwheel repair, or build with the correct --plat-name and verify policy compliance) so the tag accurately reflects the wheel’s ABI/packaging.
Suggested change:

```diff
-      python3 setup.py bdist_wheel --dist-dir=dist --py-limited-api=cp38
-      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/python3.11/site-packages/torch/lib/
-      for f in dist/*-linux_x86_64.whl; do mv "$f" "${f/linux_x86_64/manylinux_2_28_x86_64}"; done
+      pip install auditwheel
+      python3 setup.py bdist_wheel --dist-dir=dist --py-limited-api=cp38
+      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/python3.11/site-packages/torch/lib/
+      auditwheel repair --plat manylinux_2_28_x86_64 -w dist dist/*-linux_x86_64.whl
```
Purpose
Use the PyTorch wheel-build Docker image `pytorch/manylinux2_28-builder:xpu-main`, matching PyTorch's own manywheel XPU build setup:
https://github.com/pytorch/pytorch/blob/main/.ci/manywheel/build_xpu.sh

The IGC (Intel Graphics Compiler) version in this Docker image is 2.20.5.
Test Plan
CI job passes:
https://github.com/vllm-project/vllm-xpu-kernels/actions/runs/22673444106
Test Result
(Optional) Documentation Update