fix(transformers): Fix Helium UT errors #1220
Conversation
Summary of Changes
Hello @wtomin, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request resolves Helium unit-test failures under the `transformers==4.50.0` and `mindone` environments. The changes ensure that all unit tests now pass, specifically by enforcing PyNative mode for testing and making necessary adjustments to the Helium model test configuration. It also confirms that the existing generation examples continue to work after these fixes.
Highlights
- Test Compatibility Fix: The core issue addressed is the failure of Helium unit tests when run with `transformers==4.50.0` and `mindone`. This PR ensures these tests now pass.
- PyNative Mode Enforcement: The test suite has been configured to run exclusively in PyNative mode for Helium tests, with a future goal of re-enabling graph mode support.
- Test Configuration Updates: Adjustments were made to the `HeliumModelTester` configuration, specifically reducing the number of attention heads and adding a `head_dim` parameter, likely to resolve test failures or optimize test execution.
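As a rough illustration of the configuration update above, here is a minimal sketch of the adjusted tester defaults. The dict layout and the `head_dim` value of 16 are assumptions for illustration, not taken from the PR diff:

```python
# Hypothetical sketch of the updated HeliumModelTester defaults;
# head_dim=16 is an assumed illustrative value, not from the PR.
config = dict(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,   # reduced from 4
    num_key_value_heads=2,   # equal to num_attention_heads -> plain MHA
    head_dim=16,             # newly added explicit head dimension (assumed value)
)

# Query heads must be an integer multiple of key/value heads.
assert config["num_attention_heads"] % config["num_key_value_heads"] == 0
```

With both head counts equal, the number of key/value groups is 1, which is the point the review comment below raises about attention coverage.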
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | `/gemini review` | Performs a code review for the current pull request in its current state.
Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state.
Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | `/gemini help` | Displays a list of available commands.
Customization
To customize Gemini Code Assist for GitHub, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request aims to fix unit tests for the Helium model, likely to ensure compatibility with an updated version of the transformers library. The changes involve an API update in the model implementation and several adjustments to the test configuration. While most changes seem appropriate for fixing the tests, one modification reduces test coverage by disabling the test for Grouped-Query Attention (GQA). My review highlights this issue to ensure that this feature is not left untested if it's still supported.
  hidden_size=32,
  num_hidden_layers=2,
- num_attention_heads=4,
+ num_attention_heads=2,
By setting `num_attention_heads` equal to `num_key_value_heads`, this test case now only covers Multi-Head Attention (MHA), where `num_key_value_groups` is 1. The previous configuration tested Grouped-Query Attention (GQA) with `num_key_value_groups = 2`.
While this change may be necessary to fix the immediate test failure, it means the GQA implementation (specifically the `n_rep > 1` path in the `repeat_kv` function) is no longer covered by this unit test. If GQA is a feature that should still be supported, it would be beneficial to add a separate test case or configuration to verify it.
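To make the coverage gap concrete, here is a minimal NumPy sketch of the `repeat_kv` pattern as it commonly appears in transformers-style attention code. This is a simplification for illustration, not Helium's exact implementation; the `n_rep > 1` branch is the GQA path that an MHA-only configuration never exercises:

```python
import numpy as np

def repeat_kv(states: np.ndarray, n_rep: int) -> np.ndarray:
    """Expand key/value heads so each group of query heads gets its KV head.

    states: (batch, num_key_value_heads, seq_len, head_dim)
    n_rep:  num_attention_heads // num_key_value_heads (num_key_value_groups)
    """
    if n_rep == 1:  # MHA: nothing to repeat -- the only path still tested
        return states
    b, kv_heads, slen, hd = states.shape
    # GQA path: repeat each KV head n_rep times along a new axis, then flatten
    expanded = np.broadcast_to(states[:, :, None, :, :], (b, kv_heads, n_rep, slen, hd))
    return expanded.reshape(b, kv_heads * n_rep, slen, hd)

# GQA example: 2 KV heads serving 4 query heads (n_rep = 2)
kv = np.zeros((1, 2, 8, 16))
print(repeat_kv(kv, 2).shape)  # (1, 4, 8, 16)
```

A separate test configuration with, say, `num_attention_heads=4` and `num_key_value_heads=2` would keep this branch covered.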
  else:
      min_dtype = _DTYPE_2_MIN[dtype]
-     causal_mask = mint.full(
+     causal_mask = ms.ops.full(
Please add a comment explaining why `ops` is used here instead of `mint`.
Fixed.
Same error with mindspore 2.7?
Just tested. If using
Fix Helium UT test under transformers==4.50.0 and mindone:
- `num_attention_heads` and `num_key_value_heads` must be the same. Therefore, `num_attention_heads` was changed from 4 to 2.
- It seems that `mint.full` is not very robust. After changing `mint.full` to `mindspore.ops.full`, this error disappeared.
Test run output:
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================ 3 passed, 58 warnings in 29.73s ===================================
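For context on what the changed `full` call builds, here is a hedged NumPy sketch of a typical causal-mask construction around that line. The name `_DTYPE_2_MIN` mirrors the diff, but the surrounding logic is an assumption about common mask code, not the exact mindone implementation:

```python
import numpy as np

# Stand-in for the _DTYPE_2_MIN table in the diff: dtype -> most negative value
_DTYPE_2_MIN = {np.float32: np.finfo(np.float32).min}

def make_causal_mask(seq_len: int, dtype=np.float32) -> np.ndarray:
    min_dtype = _DTYPE_2_MIN[dtype]
    # Equivalent of full((seq_len, seq_len), min_dtype): start fully masked...
    mask = np.full((seq_len, seq_len), min_dtype, dtype=dtype)
    # ...then zero the lower triangle (incl. diagonal) so each position can
    # attend to itself and earlier positions only.
    return np.triu(mask, k=1)

mask = make_causal_mask(4)
# mask[i, j] == 0 for j <= i (visible), large negative for j > i (masked)
```

The swap from `mint.full` to `mindspore.ops.full` in the PR changes which MindSpore API creates the filled tensor; the mask semantics stay the same.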
Given the text input `What is your favorite condiment?`: