We recommend leveraging `uv` to [automatically select the appropriate PyTorch index at runtime](https://docs.astral.sh/uv/guides/integration/pytorch/#automatic-backend-selection) by inspecting the installed CUDA driver version via `--torch-backend=auto` (or `UV_TORCH_BACKEND=auto`). To select a specific backend (e.g., `cu126`), set `--torch-backend=cu126` (or `UV_TORCH_BACKEND=cu126`). If this doesn't work, try running `uv self update` to update `uv` first.
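For instance, both forms below pin the backend at install time (using `vllm` as the example package; substitute whatever you are installing):

```bash
# Auto-detect the installed CUDA driver and pick the matching PyTorch index
uv pip install vllm --torch-backend=auto

# Or pin a specific CUDA backend via the environment variable instead
UV_TORCH_BACKEND=cu126 uv pip install vllm
```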
LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that have not been released yet. To let users try the latest code without waiting for the next release, vLLM provides wheels for Linux running on an x86 platform with CUDA 12 for every commit since `v0.5.3`.
##### Install the latest code

To install the latest code using `uv`:
```bash
uv pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly
```
??? console "pip"

    ```bash
    pip install -U vllm \
        --pre \
        --extra-index-url https://wheels.vllm.ai/nightly
    ```

    `--pre` is required for `pip` to consider pre-released versions.
##### Install specific revisions

If you want to access the wheels for previous commits (e.g. to bisect a behavior change or performance regression), you can specify the commit hash in the URL:

```bash
export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
uv pip install vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/${VLLM_COMMIT}
```
The `uv` approach works for vLLM `v0.6.6` and later and offers an easy-to-remember command. A unique feature of `uv` is that packages in `--extra-index-url` have [higher priority than the default index](https://docs.astral.sh/uv/pip/compatibility/#packages-that-exist-on-multiple-indexes). If the latest public release is `v0.6.6.post1`, `uv`'s behavior allows installing a commit before `v0.6.6.post1` by specifying the `--extra-index-url`. In contrast, `pip` combines packages from `--extra-index-url` and the default index, choosing only the latest version, which makes it difficult to install a development version prior to the released version.
??? note "pip"

    If you want to access the wheels for previous commits (e.g. to bisect a behavior change or performance regression), due to the limitation of `pip`, you have to specify the full URL of the wheel file by embedding the commit hash in the URL:

    ```bash
    export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
    pip install https://wheels.vllm.ai/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
    ```

    Note that the wheels are built with Python 3.8 ABI (see [PEP 425](https://peps.python.org/pep-0425/) for more details about ABI), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (`1.0.0.dev`) is just a placeholder to have a unified URL for the wheels; the actual versions are contained in the wheel metadata (the wheels listed in the extra index URL have correct versions). Although we don't support Python 3.8 any more (because PyTorch 2.5 dropped support for Python 3.8), the wheels are still built with Python 3.8 ABI to keep the same wheel name as before.
# --8<-- [end:pre-built-wheels]
# --8<-- [start:build-wheel-from-source]
#### Set up using Python-only build (without compilation)
If you only need to change Python code, you can build and install vLLM without compilation. Using `uv pip`'s [`--editable` flag](https://docs.astral.sh/uv/pip/packages/#editable-packages), changes you make to the code will be reflected when you run vLLM:
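A minimal sketch of that flow, assuming a local checkout of the vLLM repository (the `VLLM_USE_PRECOMPILED=1` variable, which makes the build fetch prebuilt binaries instead of compiling kernels, is an assumption here and should be checked against the repository):

```bash
# Clone the source and do an editable, Python-only install
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 uv pip install -e .
```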
The following environment variables can be set to configure the vLLM `sccache` remote: `SCCACHE_BUCKET=vllm-build-sccache SCCACHE_REGION=us-west-2 SCCACHE_S3_NO_CREDENTIALS=1`. We also recommend setting `SCCACHE_IDLE_TIMEOUT=0`.
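Concretely, that corresponds to exporting the following before building (values taken verbatim from the sentence above):

```bash
# Point sccache at the vLLM remote build cache
export SCCACHE_BUCKET=vllm-build-sccache
export SCCACHE_REGION=us-west-2
export SCCACHE_S3_NO_CREDENTIALS=1
# Keep the sccache server alive for the whole build
export SCCACHE_IDLE_TIMEOUT=0
```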
!!! note "Faster Kernel Development"

    For frequent C++/CUDA kernel changes, after the initial `uv pip install -e .` setup, consider using the [Incremental Compilation Workflow](../../contributing/incremental_build.md) for significantly faster rebuilds of only the modified kernel code.
##### Use an existing PyTorch installation
There are scenarios where the PyTorch dependency cannot be easily installed with `uv`, e.g.:
- Building vLLM with PyTorch nightly or a custom PyTorch build.
- Building vLLM on aarch64 with CUDA (GH200), where PyTorch wheels are not available on PyPI. Currently, only the PyTorch nightly has wheels for aarch64 with CUDA. You can run `uv pip install --index-url https://download.pytorch.org/whl/nightly/cu128 torch torchvision torchaudio` to [install PyTorch nightly](https://pytorch.org/get-started/locally/) and then build vLLM on top of it.
To build vLLM using an existing PyTorch installation:
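The usual sequence looks roughly like this (a sketch: the `use_existing_torch.py` helper and the `requirements/build.txt` path are assumptions based on the vLLM repository layout and should be verified against your checkout):

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
# Strip the pinned torch requirements so the already-installed PyTorch is used
python use_existing_torch.py
uv pip install -r requirements/build.txt
# Build without build isolation so the existing PyTorch is visible to the build
uv pip install --no-build-isolation -e .
```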
To avoid overloading your system, you can limit the number of compilation jobs to be run simultaneously via the environment variable `MAX_JOBS`. For example:

```bash
export MAX_JOBS=6
uv pip install -e .
```
This is especially useful when you are building on less powerful machines. For example, when you use WSL, it only [assigns 50% of the total memory by default](https://learn.microsoft.com/en-us/windows/wsl/wsl-config#main-wsl-settings), so using `export MAX_JOBS=1` can avoid compiling multiple files simultaneously and running out of memory.