nits on any-to-any task #1372
Conversation
from qwen_omni_utils import process_mm_info
model = Qwen2_5OmniModel.from_pretrained("Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto")
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
Code snippet changes:
- `Qwen2_5OmniModel` cannot be imported.
- System prompt: `content` requires an array when applying the chat template.
- Apply the template, tokenize, and prepare inputs using a single call to the processor (a sketch follows below).
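A minimal sketch of what the revised snippet could look like, assuming the `Qwen2_5OmniForConditionalGeneration` / `Qwen2_5OmniProcessor` classes from transformers and a single `processor.apply_chat_template(...)` call that tokenizes and returns model inputs. The prompt text and audio URL are placeholders, and the exact keyword arguments and the `generate` return signature may differ in the final API (see the discussion below):

```python
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversation = [
    {
        "role": "system",
        # `content` must be an array of parts, not a bare string
        "content": [{"type": "text", "text": "You are a helpful assistant."}],  # placeholder prompt
    },
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": "https://example.com/sample.wav"},  # placeholder URL
            {"type": "text", "text": "What is said in this audio?"},
        ],
    },
]

# Apply the chat template, tokenize, and prepare model inputs in one processor call
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Per the model card, generate returns both text token ids and an audio waveform;
# this may change with the final transformers API
text_ids, audio = model.generate(**inputs)
print(processor.batch_decode(text_ids, skip_special_tokens=True)[0])
```

If media loading inside `apply_chat_template` is not supported in your transformers version, the `qwen_omni_utils.process_mm_info` helper from the original snippet can still be used to prepare audios/images/videos before a regular `processor(...)` call.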
Hmm, I think we need to update the docs; it seems the final version of the API is different.
Vaibhavs10 left a comment:
we should update this in the Qwen Omni model repo too
Yes, I'll wait to see if there's any discussion in huggingface/transformers#37660
merveenoyan left a comment:
thanks a lot!
It seems they recently updated the model card, so no need.