[Refactor] Remove moe_align_block_size_triton
#21335
Conversation
Signed-off-by: yewentao256 <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
Code Review
This pull request removes the Triton implementation of `moe_align_block_size`, as the vLLM custom op is now faster. The changes remove the Triton kernels and update the benchmark script. The code removal is clean, and the rationale is supported by the provided performance data.
line_vals=["vllm", "triton"], # "triton" | ||
line_names=["VLLM", "Triton"], # "Triton" | ||
line_vals=["vllm"], | ||
line_names=["vLLM"], |
Should we move the kernel into the benchmark script? It doesn't seem that useful to have a benchmark with just one kernel.
I'm leaving it here because I think someone else may reuse this script for comparisons, e.g. new vLLM vs. old vLLM.
I think moving the kernel into the benchmark doesn't help, because we already beat the Triton kernel for every shape, so we don't need to maintain it. In the future, when a new kernel comes along, comparing it against the current version should be enough. What are your thoughts?
CC @mgoin
Okay, that is fair enough. As long as we have a naive implementation to unit test against, it should be okay.
Yes, someone else developed a torch version for unit testing, but it is quite slow, so I chose not to include it in the benchmark comparison.
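For illustration, a minimal sketch of what such a naive torch reference might look like. This is not the code from the vLLM test suite; the function name, the sentinel used for padding slots, and the exact return convention are assumptions:

```python
import torch


def moe_align_block_size_ref(
    topk_ids: torch.Tensor, block_size: int, num_experts: int
):
    """Naive reference: group token->expert assignments by expert,
    padding each expert's group up to a multiple of block_size."""
    flat = topk_ids.flatten()
    pad_sentinel = flat.numel()  # assumed fill value for padding slots
    sorted_chunks, expert_ids = [], []
    for expert in range(num_experts):
        # Indices of all assignments routed to this expert.
        idx = torch.nonzero(flat == expert).flatten()
        num_blocks = (idx.numel() + block_size - 1) // block_size
        chunk = torch.full(
            (num_blocks * block_size,),
            pad_sentinel,
            dtype=torch.int32,
            device=flat.device,
        )
        chunk[: idx.numel()] = idx.to(torch.int32)
        sorted_chunks.append(chunk)
        # One entry per block, naming the expert that block belongs to.
        expert_ids.extend([expert] * num_blocks)
    sorted_ids = torch.cat(sorted_chunks)
    return (
        sorted_ids,
        torch.tensor(expert_ids, dtype=torch.int32, device=flat.device),
        torch.tensor(
            [sorted_ids.numel()], dtype=torch.int32, device=flat.device
        ),
    )
```

Each pass over the experts is a Python-level loop, which is why a reference like this is fine for unit tests but far too slow to be worth benchmarking.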
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: yewentao256 <[email protected]> Signed-off-by: shuw <[email protected]>
Signed-off-by: yewentao256 <[email protected]> Signed-off-by: x22x22 <[email protected]>
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: yewentao256 <[email protected]> Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: yewentao256 <[email protected]> Signed-off-by: Paul Pak <[email protected]>
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: yewentao256 <[email protected]> Signed-off-by: Boyuan Feng <[email protected]>
Signed-off-by: yewentao256 <[email protected]> Signed-off-by: Diego-Castan <[email protected]>
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: yewentao256 <[email protected]>
Signed-off-by: yewentao256 <[email protected]>
Purpose
moe_align_block_size_triton was taken from SGL and used to benchmark the moe_align_block_size kernel. But now we already beat the Triton version for every shape, so we can safely remove it; there are no other references to it in the code either.
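For readers landing here, a hedged usage sketch of the remaining vLLM kernel. The import path and the three-tensor return convention are assumptions based on vLLM's fused-MoE code at the time of this PR and may differ across versions:

```python
import torch

# Assumed import path; may differ across vLLM versions.
from vllm.model_executor.layers.fused_moe.moe_align_block_size import (
    moe_align_block_size,
)

# 1024 tokens, each routed to its top-8 of 64 experts.
topk_ids = torch.randint(0, 64, (1024, 8), dtype=torch.int32, device="cuda")

# Group the token->expert assignments into expert-contiguous blocks of 128.
sorted_ids, expert_ids, num_tokens_post_pad = moe_align_block_size(
    topk_ids, block_size=128, num_experts=64
)
```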