make transformer fix backward compatible #1794
base: main
Conversation
Signed-off-by: shanjiaz <[email protected]>
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes
Hello @shanjiaz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a robust mechanism to handle variations in key-value cache parameter naming within the `transformers` library. By dynamically adapting to the expected parameter name, it ensures that the system remains compatible across different library versions, thereby preventing potential runtime errors related to KV cache access.
Highlights
- KV Cache Compatibility: Implemented logic to dynamically determine the correct KV cache parameter name (`past_key_values` or `past_key_value`) based on the module's `forward` signature.
- Backward Compatibility: Ensures that the transformer fix remains backward compatible with different versions of the `transformers` library, preventing KV cache errors.
Code Review
This pull request introduces a backward-compatible way to handle the `past_key_values` parameter by inspecting the model's forward signature. This is a good improvement for compatibility with different `transformers` versions. My review includes a suggestion to optimize the implementation by caching the result of `inspect.signature` to avoid performance overhead during calibration, and to clarify a misleading comment.
```python
# Determine which past KV parameter name to use based on module's forward signature
_past_kv_name = (
    "past_key_values"  # transformers#39956
    if "past_key_values" in inspect.signature(module.forward).parameters
    else "past_key_value"
)

# Update both parameter names to maintain compatibility
kwargs[_past_kv_name] = kv_cache
```
Calling `inspect.signature()` on every forward pass can be inefficient, especially during calibration, which may involve many iterations. Since the forward signature of a module doesn't change at runtime, this value can be computed once and cached on the module.
Additionally, the comment on line 258, `# Update both parameter names to maintain compatibility`, is misleading as only one parameter is being updated.
The suggested code caches the parameter name on the module for performance and removes the confusing comment.
Suggested change:

```python
if not hasattr(module, "_past_kv_name"):
    # Determine which past KV parameter name to use once and cache it
    module._past_kv_name = (
        "past_key_values"  # transformers#39956
        if "past_key_values" in inspect.signature(module.forward).parameters
        else "past_key_value"
    )
kwargs[module._past_kv_name] = kv_cache
```
If this is run every forward pass, these changes are probably worth adding.
I don't think there's a great place to cache this information.
@shanjiaz We can leave this to the kv cache refactor, if that's easier
sure!
Thanks, definitely need to support older transformers versions that we’ve pinned
SUMMARY:
Use logic similar to existing code elsewhere in the code base to determine which KV cache parameter name to use.
TEST PLAN:
Tested locally; this should fix the KV cache errors.