
Port sglang/sgl-kernel/tests to sgl-kernel-xpu/tests with device-only changes #119

Draft

Copilot wants to merge 5 commits into ut_port from copilot/sub-pr-111

Conversation


Copilot AI commented Mar 10, 2026

  • Reverted all test files to original ut_port branch state

The previous commits incorrectly re-ported tests from upstream, which reverted correct device-agnostic conversions (e.g., torch.Stream() back to torch.cuda.stream()). All test files are now restored to their original ut_port state, which already had the correct device auto-detection changes applied.



Copilot AI and others added 3 commits March 10, 2026 08:36
…ection changes

Co-authored-by: airMeng <39229107+airMeng@users.noreply.github.com>
… UnboundLocalError bugs

Co-authored-by: airMeng <39229107+airMeng@users.noreply.github.com>
Co-authored-by: airMeng <39229107+airMeng@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Port tests from sglang/sgl-kernel to sgl-kernel-xpu" to "Port sglang/sgl-kernel/tests to sgl-kernel-xpu/tests with device-only changes" Mar 10, 2026
```diff
 sm_counts = get_sm_available(0)
 stream_group = create_greenctx_stream_by_value(sm_counts // 2, sm_counts // 2, 0)
-with torch.Stream(stream_group[0]):
+with torch.cuda.stream(stream_group[0]):
```
Collaborator

@copilot check again, you are doing exactly what is opposite the command

Author

You're right — my previous commits incorrectly re-ported from upstream, which reverted correct device-agnostic conversions like torch.Stream() back to torch.cuda.stream(). Reverted all test files to the original ut_port state in 8f3e4ab.

Co-authored-by: airMeng <39229107+airMeng@users.noreply.github.com>
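The conversion under discussion (the backend-neutral torch.Stream API versus the CUDA-only torch.cuda.stream context manager) boils down to a capability check. A minimal sketch of that check, assuming nothing about sgl-kernel-xpu itself — pick_stream_api is a hypothetical helper, not repository code:

```python
# Hedged sketch: prefer the device-agnostic torch.Stream API and only fall
# back to the CUDA-specific torch.cuda.stream path. pick_stream_api is an
# illustrative helper, not part of sgl-kernel-xpu.
import importlib.util


def pick_stream_api() -> str:
    """Report which stream API this environment can use."""
    if importlib.util.find_spec("torch") is None:
        return "none"        # torch not installed: nothing to pick
    import torch

    if hasattr(torch, "Stream"):
        return "generic"     # device-agnostic API (newer torch releases)
    return "cuda-only"       # older torch: only torch.cuda.stream exists
```

A device-agnostic test would branch on this once, instead of hard-coding `torch.cuda.stream(...)` at every call site.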

airMeng commented Mar 10, 2026

@copilot so does #111 contain only device-related changes?


Copilot AI commented Mar 10, 2026

> @copilot so does #111 contain only device-related changes?

After the revert in 8f3e4ab, #111 currently has zero diff against ut_port — all device-related changes were already in the base branch. The PR is effectively empty now. If there are specific device-related changes that still need to be made on top of ut_port, could you clarify what those are?


airMeng commented Mar 10, 2026

@copilot help me check whether #111 contains only device-related changes.


Copilot AI commented Mar 10, 2026

> @copilot help me check whether #111 contains only device-related changes.

No, PR #111 (ut_port → main) does not contain only device-related changes. Here's what I found beyond the device changes:

Non-device changes in modified files:

  • test_eagle_utils.py / test_speculative_sampling.py: dtype changed torch.int32 → torch.int64 (8 total), removed a .to(torch.int32) cast, removed the num_draft_tokens line, added a coins_for_final_sampling parameter
  • test_flash_attn_sparse.py: file renamed from test_sparse_flash_attn.py, added is_fa3_supported() skip markers, changed sys.exit(pytest.main(...)) → pytest.main(...)
  • test_int8_gemm.py: added is_sm10x skip condition
  • test_per_token_quant_fp8.py: added extra test parameters [1076, 1368] to the N dimension list
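The skip-marker pattern mentioned for test_flash_attn_sparse.py can be sketched as below. This is a hedged illustration, not the repository's code: is_fa3_supported here is a stand-in that always returns False, so the test is skipped rather than failing on unsupported hardware.

```python
# Hedged sketch of an is_fa3_supported() skip marker. The real capability
# probe lives in the repository; this stand-in always reports "unsupported".
import pytest


def is_fa3_supported() -> bool:
    return False  # placeholder capability probe, not the real check


@pytest.mark.skipif(not is_fa3_supported(), reason="FA3 kernels unsupported here")
def test_flash_attn_sparse_smoke():
    assert True


# Per the change described above, such a file would end with a bare
# pytest.main call instead of wrapping it in sys.exit:
#   pytest.main([__file__, "-q"])
```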

Structural changes:

  • 4 test files removed (ep_moe_post_reorder, ep_moe_pre_reorder, ep_moe_silu_and_mul, lightning_attention_decode)
  • 24 new test files added (ported from upstream)
  • New conftest.py with auto-reset fixture added

