[Disagg][Perf] Use NPU event sync instead of blocking tolist to avoid unintentional copy ops blocking across different NPU streams, improving disagg TTIT/TTFT #2788
Status: Merged

Changes from 15 of 16 commits
Commits (all by jesse996):

- b6c5ef9 use event sync
- 9816a36 add test
- f14a98b update test
- beabae4 fix test
- 3da83fe fix test
- 1695f5f fix test
- c483b20 fix test
- ed0b72f fix test
- 1f9cb35 fix test
- 9c8fb4c fix test
- 5be58d5 Merge branch 'main' into event-sync
- 598c896 update test
- 674be75 update test
- 4588d12 Merge branch 'main' into event-sync
- d81f665 update test
- dd4c177 update comment
```diff
@@ -227,6 +227,7 @@ def __init__(self, vllm_config: VllmConfig, device: torch.device):
         self.block_size = vllm_config.cache_config.block_size
         self.max_num_blocks_per_req = cdiv(self.model_config.max_model_len,
                                            self.block_size)
+        self.max_model_len = self.model_config.max_model_len
         self.max_num_tokens = self.scheduler_config.max_num_batched_tokens
         decode_max_num_seqs = getattr(self.scheduler_config,
                                       'decode_max_num_seqs', 0)
```

```diff
@@ -401,6 +402,12 @@ def __init__(self, vllm_config: VllmConfig, device: torch.device):
         # Cached outputs.
         self._draft_token_ids: Optional[Union[list[list[int]],
                                               torch.Tensor]] = None
+        self.transfer_event = torch_npu.npu.Event()
+        self.sampled_token_ids_pinned_cpu = torch.empty(
+            (self.max_model_len, 1),
+            dtype=torch.int64,
+            device="cpu",
+            pin_memory=True)

         # NOTE: we need to use `in_profile_run` to determine whether `enable_force_load_balance` is True
         self.in_profile_run = False
```
```diff
@@ -1906,7 +1913,7 @@ def execute_model(
         max_gen_len = sampled_token_ids.shape[-1]
         if max_gen_len == 1:
             # No spec decode tokens.
-            valid_sampled_token_ids = sampled_token_ids.tolist()
+            valid_sampled_token_ids = self._to_list(sampled_token_ids)
         else:
             # Includes spec decode tokens.
             valid_sampled_token_ids = self.rejection_sampler.parse_output(
```
```diff
@@ -3054,3 +3061,18 @@ def get_supported_pooling_tasks(self):

     def _build_drafter_prepare_inputs_torchair_param(self):
         return False

+    def _to_list(self, sampled_token_ids: torch.Tensor) -> list[list[int]]:
+        # This is a short term mitigation for issue mentioned in
+        # https://github.com/vllm-project/vllm/issues/22754.
+        # `tolist` would trigger a cuda wise stream sync, which
+        # would block other copy ops from other cuda streams.
+        # A cuda event sync would avoid such a situation. Since
+        # this is in the critical path of every single model
+        # forward loop, this has caused perf issue for a disagg
+        # setup.
+        pinned = self.sampled_token_ids_pinned_cpu[:sampled_token_ids.shape[0]]
+        pinned.copy_(sampled_token_ids, non_blocking=True)
+        self.transfer_event.record()
+        self.transfer_event.synchronize()
+        return pinned.tolist()
```

Review thread on the mitigation comment:

> **Reviewer:** can you rewrite the comment to ascend case?
>
> **jesse996:** updated
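The rationale behind `_to_list` can be illustrated without NPU hardware: a full stream sync (which the implicit device-to-host transfer in `tolist()` triggers) waits for *every* queued operation, including unrelated copies such as a disagg KV-cache transfer, while an event sync waits only for the one small copy that is actually needed. Below is a minimal, purely illustrative sketch of that difference using Python threading; `FakeStream`, the durations, and the payloads are all hypothetical stand-ins, and real code uses `torch_npu.npu.Event` as in the diff above.

```python
import threading
import time

class FakeStream:
    """Toy stand-in for a device stream: each op runs on a worker thread
    and signals completion via a threading.Event."""

    def __init__(self):
        self._events = []

    def launch(self, fn, duration):
        done = threading.Event()

        def op():
            time.sleep(duration)  # simulated device work
            fn()
            done.set()

        self._events.append(done)
        threading.Thread(target=op).start()
        return done  # acts like an event recorded after this op

    def synchronize(self):
        # Full stream sync: wait for *every* outstanding op,
        # analogous to the implicit sync inside `tolist()`.
        for done in self._events:
            done.wait()

stream = FakeStream()
result = {}

# A long, unrelated copy (think: KV-cache transfer in a disagg setup).
stream.launch(lambda: result.update(kv_copy=True), duration=0.5)

# The small copy we actually need (sampled token ids -> pinned CPU buffer).
token_event = stream.launch(lambda: result.update(tokens=[1, 2, 3]),
                            duration=0.05)

# Event sync: wait only for the token copy
# (analogous to transfer_event.synchronize()).
t0 = time.perf_counter()
token_event.wait()
event_wait = time.perf_counter() - t0

# Full sync: also absorbs the long KV copy.
t0 = time.perf_counter()
stream.synchronize()
full_wait = time.perf_counter() - t0

print(f"event sync waited {event_wait:.2f}s, full sync waited {full_wait:.2f}s")
```

Since `_to_list` sits on the critical path of every decode step, shaving this per-step wait is where the TTIT/TTFT improvement in the PR title comes from.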