[CPU][float8] Add QEmbeddingbag kernel #2686

Open
wants to merge 9 commits into main
Conversation

@shiyang-weng shiyang-weng commented Aug 5, 2025

Implemented FP8 QEmbeddingBag on CPU, currently supporting:

  • include_last_offset=True
  • mode="sum"

Next steps

  1. Expand the supported modes.
  2. Use native FP8 instructions instead.


pytorch-bot bot commented Aug 5, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2686

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 0e10992 with merge base 7dbc816 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@shiyang-weng shiyang-weng marked this pull request as draft August 5, 2025 01:37
@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 5, 2025
"CPU" not in torch._C._dispatch_dump("torchao::qembeddingbag"),
reason="cpp kernels not built",
)
def test_embeddingbag_cpu(self):
Contributor
I think the test should be added here: https://github.com/pytorch/ao/blob/main/test/test_ops.py



pytorch-bot bot commented Aug 7, 2025

❌ 🤖 pytorchbot command failed:

@pytorchbot: error: argument command: invalid choice: 'topic: new feature' (choose from 'merge', 'revert', 'rebase', 'label', 'drci', 'cherry-pick')

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci,cherry-pick} ...

Try @pytorchbot --help for more info.

@shiyang-weng
Contributor Author

@pytorchbot label "topic: new feature"

@pytorch-bot pytorch-bot bot added the topic: new feature Use this tag if this PR adds a new feature label Aug 7, 2025
@shiyang-weng shiyang-weng marked this pull request as ready for review August 7, 2025 02:43

@Xia-Weiwen Xia-Weiwen left a comment


LGTM. Have you run any benchmarks to make sure it's not too slow?

@Xia-Weiwen Xia-Weiwen requested a review from jerryzh168 August 11, 2025 02:10
@shiyang-weng
Copy link
Contributor Author

@jerryzh168 Could you help review this PR?

@@ -70,6 +70,9 @@
lib.define(
"da8w4_linear_cpu(Tensor input, Tensor input_scales, Tensor input_qzeros, Tensor weight, Tensor weight_scales, Tensor weight_qzeros, Tensor compensation, Tensor? bias, ScalarType output_dtype) -> Tensor"
)
lib.define(
"qembeddingbag(Tensor qweight, Tensor indices, Tensor offsets, Tensor weight_scale, float o_scale, int mode, bool include_last_offset) -> Tensor"
Collaborator
@jerryzh168 Thanks for reviewing. Yes, I think so, except that the implementation in this PR has limited functionality so far.

Contributor Author

This operator is used for inference only, so I did not add any gradient-related parameters, including scale_grad_by_freq, sparse, per_sample_weights, and padding_idx.

@jerryzh168 jerryzh168 commented Aug 15, 2025

I think we should add this to PyTorch directly if that's the case. float8 is a native dtype in PyTorch, so it makes the most sense to add the functionality there; we can error out in the op if some argument combination is not supported or invalid for float8.
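The error-out approach could look like the following sketch. The helper name and messages are hypothetical, not actual PyTorch code; the supported combination reflects what this PR implements (mode="sum" with include_last_offset=True, inference only).

```python
def check_float8_embedding_bag_args(mode, include_last_offset, per_sample_weights):
    # Hypothetical guard rejecting argument combinations the float8
    # EmbeddingBag path does not support yet.
    if mode != "sum":
        raise NotImplementedError(
            f"float8 EmbeddingBag: mode={mode!r} is not supported, use 'sum'"
        )
    if not include_last_offset:
        raise NotImplementedError(
            "float8 EmbeddingBag requires include_last_offset=True"
        )
    if per_sample_weights is not None:
        raise NotImplementedError(
            "float8 EmbeddingBag does not support per_sample_weights"
        )
```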

Contributor Author

Intel platforms have FP8 instructions. When we are ready, we hope to update this kernel to use them. As far as I know, the latest GCC is required. Is that difficult to support in PyTorch?
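For background on what a native-instruction path would replace: float8_e4m3fn, the format this kernel quantizes to, can be decoded in plain software. A minimal sketch of the e4m3fn layout (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits; the format has NaN but no infinities):

```python
def decode_e4m3fn(byte: int) -> float:
    # Decode one float8_e4m3fn byte to a Python float.
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0xF and man == 0x7:
        return float("nan")            # only NaN encoding, no +/-inf
    if exp == 0:
        return sign * man * 2.0 ** -9  # subnormal: (man/8) * 2**-6
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)
```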

Contributor

I'm not sure. Can you open an issue for this in pytorch/pytorch?

Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. topic: new feature Use this tag if this PR adds a new feature