Conversation

LiTingyu1997
Contributor

What does this PR do?

Fixes # (issue)

Enlarged the bf16 threshold for Hubert unit tests, since torch on CPU does not fully support bf16.

Adds # (feature)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline?
  • Did you make sure to update the documentation with your changes? E.g. record bug fixes or new features in What's New. Here are the documentation guidelines.
  • Did you build and run the code without any errors?
  • Did you report the running environment (NPU type/MS version) and performance in the doc? (It is better to record this for data loading, model inference, or training tasks.)
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@xxx

@LiTingyu1997 LiTingyu1997 requested a review from vigo999 as a code owner on October 11, 2025 at 01:37

Summary of Changes

Hello @LiTingyu1997, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves recurring unit test failures for the Hubert model by adjusting the bf16 precision threshold. The modification ensures that tests pass reliably in environments where bf16 operations have different precision characteristics, such as on CPU, preventing erroneous test failures. The change specifically targets failures encountered because torch on CPU does not fully support bf16 under the previous, stricter threshold.

Highlights

  • Hubert Test Threshold Adjustment: The bf16 precision threshold for Hubert model unit tests has been increased from 5e-3 to 5e-2. This change addresses fast unit test errors, particularly in environments where bf16 operations might have different precision characteristics.
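
For context, here is a minimal sketch of how a per-dtype threshold dictionary like this is typically consumed in a precision test. The helper name and the exact comparison metric are assumptions for illustration, not the repository's actual test code.

import numpy as np

DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}

def assert_close(expected: np.ndarray, actual: np.ndarray, dtype: str) -> None:
    # Hypothetical helper: fail if the maximum relative difference between
    # a model output and its reference exceeds the dtype's threshold.
    diff = np.abs(expected - actual) / (np.abs(expected) + 1e-8)
    threshold = DTYPE_AND_THRESHOLDS[dtype]
    assert diff.max() < threshold, (
        f"{dtype}: max relative diff {diff.max():.2e} exceeds threshold {threshold:.0e}"
    )
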
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.
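
For example, to request a fresh review of this pull request in its current state, post a comment containing just:

/gemini review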

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.


gemini-code-assist bot left a comment


Code Review

This pull request addresses a unit test failure for the Hubert model by increasing the precision tolerance for bfloat16 computations. The threshold has been raised from 5e-3 to 5e-2. While this resolves the test failure, such a significant increase in the tolerance might mask potential precision regressions in the future. I have added a review comment suggesting the addition of an inline comment to document the reason for this high threshold, which will improve code clarity and maintainability.

from tests.transformers_tests.models.modeling_common import floats_numpy, random_attention_mask

- DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-3}
+ DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}

Severity: medium

Increasing the bf16 threshold by a factor of 10 to 5e-2 is a significant change that could mask future precision regressions. To maintain test quality, it's important to keep thresholds as tight as possible.

If this large threshold is necessary due to limitations with bfloat16 on CPU, please add an inline comment to explain the reason. This provides valuable context for future developers and justifies the large value.

Suggested change:

- DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}
+ DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}  # Increased for bf16 on CPU due to precision issues.
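
To illustrate why bf16 usually warrants a looser tolerance than fp16: bfloat16 stores only 7 mantissa bits, so a single round-trip cast already introduces a relative error of up to about 2**-8 ≈ 3.9e-3, and these errors compound across the many operations of a full Hubert forward pass. The following standalone snippet (not part of this PR) demonstrates the single-cast error:

import torch

# Round-trip a float32 tensor through bfloat16 and measure the worst
# relative error introduced by the cast alone.
x = torch.randn(10_000, dtype=torch.float32)
x_roundtrip = x.to(torch.bfloat16).to(torch.float32)
rel_err = ((x - x_roundtrip).abs() / x.abs().clamp_min(1e-8)).max()
print(f"max relative error from one bf16 round-trip: {rel_err:.2e}")  # ~3.9e-3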
