
Conversation

@xuebwang commented Jul 28, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Error:

 File "/home/xuebwang/vllm-project/vllm/vllm/model_executor/models/llama4.py", line 617, in permute
    return w.view(n_heads, attn_in // n_heads // 2, 2,
RuntimeError: shape '[8, 64, 2, 5120]' is invalid for input of size 2621440
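
The size arithmetic in the traceback already hints at the packing: the requested view needs 8 × 64 × 2 × 5120 = 5,242,880 elements, while the tensor holds exactly half of that (2,621,440). A quick check of the numbers from the error message:

    # Numbers taken directly from the RuntimeError above.
    expected = 8 * 64 * 2 * 5120   # elements the unpacked view would need: 5,242,880
    actual = 2621440               # size reported for the packed uint8 tensor
    assert expected == 2 * actual  # the packed tensor holds exactly half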

Model configuration:

  • MXFP4 quantization of the Llama 4 Scout model using amd-quark.
  • The issue occurs on the language_model.model.layers.*.self_attn.q_proj and language_model.model.layers.*.self_attn.k_proj layers.

Root cause analysis:

  • For MXFP4 quantization, the exported weights are packed into uint8 with column-wise compression, i.e., from [M, N] to [M, N/2]. The weight w therefore has shape [attn_in, attn_out//2].

  • The permutation applied to the q/k weights, on the other hand, effectively swaps adjacent rows; the column order within any given row is never broken, so the packed values stay intact.

  • Therefore, one can safely and simply halve the attn_out dimension, i.e., attn_out = attn_out // 2 (see the sketch below).
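
Putting this together, a minimal sketch of the adjusted permute helper (the signature is simplified to take attn_in/attn_out explicitly, and the view/transpose/reshape tail is reconstructed from the traceback, so it may differ slightly from the actual llama4.py code):

    import torch

    def permute(w: torch.Tensor, n_heads: int, attn_in: int, attn_out: int) -> torch.Tensor:
        # Packed 4-bit weights store two values per uint8/int8 element, so the
        # last dimension of w is already halved; the view/reshape must match that.
        if w.dtype in (torch.uint8, torch.int8):
            attn_out = attn_out // 2
        # q/k interleave permutation: reorders rows per head, leaving the column
        # order within each row (and thus the packing) untouched.
        return w.view(n_heads, attn_in // n_heads // 2, 2,
                      attn_out).transpose(1, 2).reshape(attn_in, attn_out)

This mirrors the two-line change in the diff hunk below: only the output dimension used for the view is adjusted when the weight dtype indicates a packed tensor.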

Test Plan

Test Result

(Optional) Documentation Update

@xuebwang marked this pull request as draft July 28, 2025 06:27
@mergify[bot] added the llama (Related to Llama models) label Jul 28, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This PR fixes a weight loading issue for llama4 with mxfp4 packed weights. The change correctly adjusts the output dimension for packed tensors. My review includes a suggestion to make the check for packed weights more robust by using the quantization configuration, which will prevent potential issues with other 8-bit quantization schemes.

Comment on lines +567 to +568
if w.dtype in [torch.uint8, torch.int8]:
    attn_out = attn_out // 2

Severity: high

While this fix is likely correct for the intended mxfp4 case, relying solely on w.dtype is a bit fragile. It assumes that any uint8 or int8 weights encountered here are packed 4-bit tensors.

To make this more robust and prevent potential issues if other 8-bit quantization schemes are used in the future, I suggest explicitly checking for the mxfp4 quantization configuration. This makes the intention clearer and safer.

I'm assuming the quantization name is 'mxfp4' based on the PR title. Please verify and adjust if needed.

if (self.quant_config and self.quant_config.get_name() == "mxfp4"
        and w.dtype in [torch.uint8, torch.int8]):
    attn_out = attn_out // 2


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@nvpohanh (Contributor)

This should be fixed in my #21499.

Comment on lines +567 to +568
if w.dtype in [torch.uint8, torch.int8]:
    attn_out = attn_out // 2
Contributor

Agree with Gemini, this is not very robust.

E.g., wouldn't

weight = ModelWeightParameter(data=torch.empty(
    sum(output_partition_sizes),
    input_size_per_partition,
    dtype=torch.int8),
    input_dim=1,
    output_dim=0,
    weight_loader=weight_loader)
or
weight = ModelWeightParameter(data=torch.empty(
    sum(output_partition_sizes),
    input_size_per_partition,
    dtype=torch.int8),
fail here?

Shouldn't there be a pack factor defined for all quantization methods that could be used here?
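
One hedged sketch of what a pack-factor-based check could look like (attribute names such as pack_factor on the quant method are assumptions for illustration, not necessarily the exact vLLM API):

    import torch

    def effective_out_dim(w: torch.Tensor, attn_out: int, quant_method=None) -> int:
        # pack_factor = how many logical values are stored per physical element
        # (e.g. 2 for 4-bit values packed into uint8). Hypothetical attribute.
        pack_factor = getattr(quant_method, "pack_factor", 1) if quant_method else 1
        if pack_factor > 1 and w.dtype in (torch.uint8, torch.int8):
            return attn_out // pack_factor
        return attn_out

This would keep the dtype check but tie the halving to the quantization method's declared packing rather than to the dtype alone.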


mergify bot commented Jul 31, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @xuebwang.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify[bot] added the needs-rebase label Jul 31, 2025
@nvpohanh (Contributor) commented Aug 1, 2025

This should already be fixed by #21499.
