
Conversation

@lilinsiman (Contributor) commented on Oct 30, 2025

What this PR does / why we need it?

Adds a new test model, vllm-ascend/DeepSeek-V2-Lite-W8A8, to the aclgraph single_request end-to-end test.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Unit tests (UT).

@github-actions bot commented:

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adds a new test model, vllm-ascend/DeepSeek-V2-Lite-W8A8, to the single_request_aclgraph end-to-end test. My review identifies a critical issue with how server arguments are constructed for this new model, which would cause the test to fail. I've provided a code suggestion to fix the bug and refactor the code for better readability and maintainability.

Comment on lines 55 to 67
if model == "vllm-ascend/DeepSeek-V2-Lite-W8A8":
    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
        "--data-parallel-size",
        "--data-parallel-size", "quantization", "ascend",
        str(dp_size), "--port",
        str(port), "--trust-remote-code", "--gpu-memory-utilization", "0.9"
    ]
else:
    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
        "--data-parallel-size",
        str(dp_size), "--port",
        str(port), "--trust-remote-code", "--gpu-memory-utilization", "0.9"
    ]

Severity: critical

The server arguments for the new model vllm-ascend/DeepSeek-V2-Lite-W8A8 are incorrect. The arguments "quantization" and "ascend" are misplaced and will cause the argument parser to fail. They should be passed as "--quantization", "ascend".
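To make the failure mode concrete, here is a minimal sketch using plain argparse as a stand-in for the server's CLI parser (an assumption for illustration; the real parser defines many more options):

import argparse

# Stand-in parser (assumption for illustration; vLLM's real CLI has many
# more options, but these two are enough to show the failure).
parser = argparse.ArgumentParser()
parser.add_argument("--data-parallel-size")
parser.add_argument("--quantization")

# Misplaced tokens: "quantization" is swallowed as the value of
# --data-parallel-size, and "ascend" plus the real dp_size value are left
# over as unrecognized arguments, so parsing aborts.
try:
    parser.parse_args(["--data-parallel-size", "quantization", "ascend", "2"])
except SystemExit:
    print("parse failed, as expected")

# The corrected form parses cleanly.
args = parser.parse_args(["--data-parallel-size", "2", "--quantization", "ascend"])
print(args.quantization)  # -> ascend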

Additionally, there is significant code duplication between the if and else blocks. This can be refactored to improve readability and maintainability by defining the common arguments first and then conditionally adding the model-specific ones.

Suggested change
-if model == "vllm-ascend/DeepSeek-V2-Lite-W8A8":
-    server_args = [
-        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
-        "--data-parallel-size",
-        "--data-parallel-size", "quantization", "ascend",
-        str(dp_size), "--port",
-        str(port), "--trust-remote-code", "--gpu-memory-utilization", "0.9"
-    ]
-else:
-    server_args = [
-        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
-        "--data-parallel-size",
-        str(dp_size), "--port",
-        str(port), "--trust-remote-code", "--gpu-memory-utilization", "0.9"
-    ]
+server_args = [
+    "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
+    "--data-parallel-size", str(dp_size),
+    "--port", str(port),
+    "--trust-remote-code", "--gpu-memory-utilization", "0.9"
+]
+if model == "vllm-ascend/DeepSeek-V2-Lite-W8A8":
+    server_args.extend(["--quantization", "ascend"])
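
As a usage note on the suggested refactor, here is a hedged sketch of how the construction could be pulled into a small helper and sanity-checked; the helper name and the second model string are hypothetical, not the PR's actual test code:

# Hypothetical helper (name is an assumption, not the PR's code) that
# builds the common server arguments and appends the model-specific
# quantization flags, mirroring the suggested change above.
def build_server_args(model: str, dp_size: int, port: int) -> list:
    server_args = [
        "--no-enable-prefix-caching", "--tensor-parallel-size", "1",
        "--data-parallel-size", str(dp_size),
        "--port", str(port),
        "--trust-remote-code", "--gpu-memory-utilization", "0.9",
    ]
    if model == "vllm-ascend/DeepSeek-V2-Lite-W8A8":
        server_args.extend(["--quantization", "ascend"])
    return server_args

# Sanity checks with illustrative values (dp_size and port are arbitrary).
assert "--quantization" in build_server_args(
    "vllm-ascend/DeepSeek-V2-Lite-W8A8", dp_size=2, port=8000)
assert "--quantization" not in build_server_args(
    "some/other-model", dp_size=2, port=8000)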

@MengqingCao added the ready (read for review) and ready-for-test (start test by label for PR) labels on Oct 30, 2025
@lilinsiman force-pushed the aclgraph_single branch 4 times, most recently from 9e814d7 to 27aef0b on October 30, 2025 at 11:48
@yiz-liu merged commit 1f486b2 into vllm-project:main on Oct 31, 2025
22 checks passed

Labels

module:tests · ready (read for review) · ready-for-test (start test by label for PR)
