
Conversation

@njhill njhill (Member) commented Jul 23, 2025

The Ray PD compatibility fix (#21072) broke non-Ray tensor-parallel (TP) prefill/decode (PD) disaggregation.

Discovered by @robertgshaw2-redhat.

@njhill njhill added the bug Something isn't working label Jul 23, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Jul 23, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes an issue with KVConnector TP worker aggregation in non-Ray prefill/decode (PD) disaggregated setups. The changes primarily adjust the return logic in gpu_worker.py for non-last pipeline-parallel ranks so that KV cache transfer status is handled correctly. A potential issue was identified where the code could modify a shared mutable global constant; a code suggestion has been provided to address this and make the implementation more robust.

    empty_output.finished_sending = output.finished_sending
    empty_output.finished_recving = output.finished_recving
    output = empty_output

    new_output = copy.copy(new_output)
Collaborator


this needs to be a deepcopy right?

Collaborator


since we updated the value of EMPTY_MODEL_RUNNER_OUTPUT?

Member Author


I don't completely follow what you mean, but I'm pretty sure it doesn't need to be a deep copy. We are copying only because we don't want to modify the shared EMPTY_MODEL_RUNNER_OUTPUT itself.
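The distinction in this thread can be illustrated with a minimal sketch. `RunnerOutput` and its fields below are hypothetical stand-ins for vLLM's `ModelRunnerOutput` / `EMPTY_MODEL_RUNNER_OUTPUT`, not the actual classes; the point is that a shallow copy is enough when attributes are *reassigned*, and only in-place mutation of shared nested objects would require a deep copy:

```python
import copy
from dataclasses import dataclass, field

# Hypothetical stand-in for a shared "empty output" constant; the field
# names are illustrative only.
@dataclass
class RunnerOutput:
    finished_sending: set = field(default_factory=set)
    finished_recving: set = field(default_factory=set)

EMPTY = RunnerOutput()  # shared module-level constant

# Safe pattern: shallow-copy, then REASSIGN the attributes on the copy.
out = copy.copy(EMPTY)
out.finished_sending = {"req-1"}     # rebinds the attribute on the copy only
assert EMPTY.finished_sending == set()   # shared constant untouched

# Unsafe pattern: shallow-copy, then MUTATE a nested object in place.
bad = copy.copy(EMPTY)
bad.finished_recving.add("req-2")        # mutates the set shared with EMPTY
assert EMPTY.finished_recving == {"req-2"}   # the "constant" is now corrupted
```

The fix in this PR follows the first pattern, which is why a shallow copy suffices.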

@robertgshaw2-redhat robertgshaw2-redhat enabled auto-merge (squash) July 23, 2025 18:06
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 23, 2025
@simon-mo simon-mo added this to the v0.10.0 milestone Jul 24, 2025
@simon-mo simon-mo disabled auto-merge July 24, 2025 03:56
@simon-mo simon-mo merged commit eec6942 into vllm-project:main Jul 24, 2025
75 of 76 checks passed
Comment on lines +347 to +350
            output = new_output

        assert isinstance(output, ModelRunnerOutput)
-       # return output only from the driver worker
-       return output if self.is_driver_worker else None
+       return output
Contributor

@sdavidbd sdavidbd Jul 24, 2025


This change modifies the original behavior: if no KV connector is present, non-driver workers in the last PP rank now return output. I'm not certain this has any practical impact, though, since under MultiprocExecutor only the worker with output_rank sends its output back via WorkerProc.
Suggested fix:

            return new_output

        assert isinstance(output, ModelRunnerOutput)
        return_output = self.is_driver_worker or has_kv_transfer_group()
        return output if return_output else None 
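The gating in the suggestion above can be sketched as a standalone function. The parameter names below (`is_driver_worker`, `kv_transfer_active` standing in for `has_kv_transfer_group()`) mirror the snippet but are illustrative; this is not the actual vLLM implementation:

```python
# Hypothetical sketch of the suggested return-gating logic.
def maybe_return_output(output, is_driver_worker: bool,
                        kv_transfer_active: bool):
    # The driver worker always returns its output; other workers return
    # theirs only when a KV connector is active, because their KV-transfer
    # status must then be aggregated by the executor.
    return output if (is_driver_worker or kv_transfer_active) else None
```

With this gating, non-driver workers keep returning None in the plain (no-connector) case, preserving the pre-PR behavior while still surfacing transfer status when a KV connector is in use.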

DW934 pushed a commit to DW934/vllm that referenced this pull request Jul 28, 2025
avigny pushed a commit to avigny/vllm that referenced this pull request Jul 31, 2025
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request Aug 4, 2025
…nch (#2122)

### What this PR does / why we need it?
vLLM's main branch merged vllm-project/vllm#21072 and
vllm-project/vllm#21473 to support the Ray backend and fix some rebase
bugs from a previous change. Those changes break disaggregated PD in
vLLM Ascend in some scenarios.

In this PR, we adopt those changes to make sure the
`llmdatddist_c_mgr_connector` works correctly on the newest vLLM main branch.

### Does this PR introduce _any_ user-facing change?

No user-facing change.

### How was this patch tested?
Relevant unit tests will be added to verify the functionality of these
changes.

- vLLM version: v0.10.0
- vLLM main:
vllm-project/vllm@ad57f23

---------

Signed-off-by: ganyi <[email protected]>
wenscarl pushed a commit to wenscarl/vllm that referenced this pull request Aug 4, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
zzhx1 pushed a commit to lidenghui1110/vllm-ascend that referenced this pull request Aug 11, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
taneem-ibrahim pushed a commit to taneem-ibrahim/vllm that referenced this pull request Aug 14, 2025
BoyuanFeng pushed a commit to BoyuanFeng/vllm that referenced this pull request Aug 14, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
googlercolin pushed a commit to googlercolin/vllm that referenced this pull request Aug 29, 2025