Conversation

@lilinsiman
Contributor

What this PR does / why we need it?

Add a new test case for aclgraph capture and replay (v0.11.0).

Does this PR introduce any user-facing change?

no

How was this patch tested?

Unit tests (UT).

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to the Contributing and Testing guides.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request adds a new end-to-end test for aclgraph capture and replay with a data parallelism size of 2. The test correctly uses multiprocessing to simulate a multi-rank environment and patches torch.npu.NPUGraph methods to verify the number of captures and replays. However, I've identified a critical issue with a long sleep at the end of the test, which will severely impact test suite performance, and a high-severity issue regarding the use of magic numbers in assertions, which harms maintainability. Please address these points to improve the test's quality and reliability.

    f"Replay count mismatch. Expected: {expected_total_replay}, Got: {actual_replay}"
    )
    os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = 'spawn'
    sleep(600)

critical

A 10-minute sleep (sleep(600)) at the end of a test is a major concern. It will dramatically increase the execution time of the test suite. This is often a workaround for issues with resource cleanup that are not being handled correctly. Please investigate the root cause that necessitates this long delay and implement a proper fix. If a delay is truly unavoidable, it must be justified with a detailed comment, and a more robust synchronization mechanism should be preferred over a fixed sleep.

Suggested change:

    - sleep(600)
    + # FIXME: This long sleep is a workaround and should be removed after fixing the underlying resource cleanup issue.

Comment on lines +153 to +155
    max_num_batch_sizes = math.floor(
        (1800 - num_comm_groups * 40) / num_acl_graphs /
        (1 + num_comm_groups * 2))

high

The calculation for max_num_batch_sizes uses several magic numbers (1800, 40, 2), which makes the test difficult to understand and maintain. If the underlying logic for this calculation changes in the future, this test will fail without a clear indication of the root cause. Please replace these magic numbers with named constants. This will greatly improve the readability and maintainability of the test. For example:

    # Constants from Ascend backend for max_num_batch_sizes calculation.
    # Please verify and document their exact meaning.
    ACL_GRAPH_RESOURCE_LIMIT = 1800
    RESOURCE_COST_PER_COMM_GROUP = 40
    REPLAY_OVERHEAD_FACTOR = 2
    max_num_batch_sizes = math.floor(
        (ACL_GRAPH_RESOURCE_LIMIT - num_comm_groups * RESOURCE_COST_PER_COMM_GROUP) /
        num_acl_graphs / (1 + num_comm_groups * REPLAY_OVERHEAD_FACTOR))
