
Conversation

@pcuenca
Member

@pcuenca pcuenca commented Apr 21, 2025

No description provided.

```python
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto"
)
```
Member Author

Code snippet changes:

  • Qwen2_5OmniModel cannot be imported.
  • System prompt: content must be an array when applying the chat template.
  • Apply the template, tokenize, and prepare inputs with a single call to the processor.
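
The second and third bullets can be sketched as follows. `make_message` and `prepare_inputs` are hypothetical helper names, and the exact argument set to `apply_chat_template` follows the transformers processor API; treat this as a sketch of the described fixes, not the PR's exact snippet.

```python
# Sketch of the two fixes described above (helper names are hypothetical).

def make_message(role, text):
    # The chat template expects "content" to be an array of typed parts,
    # not a bare string -- including for the system prompt.
    return {"role": role, "content": [{"type": "text", "text": text}]}

def prepare_inputs(processor, conversation):
    # A single processor call applies the chat template, tokenizes,
    # and returns model-ready tensors.
    return processor.apply_chat_template(
        conversation,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    )

conversation = [
    make_message("system", "You are a helpful assistant."),
    make_message("user", "Hello!"),
]
```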

Contributor

@merveenoyan merveenoyan Apr 22, 2025

Hmm, I think we need to update the docs; it seems the final version of the API is different.

Contributor

@Vaibhavs10 Vaibhavs10 left a comment

we should update this in the Qwen Omni model repo too

@pcuenca
Member Author

pcuenca commented Apr 22, 2025

we should update this in the Qwen Omni model repo too

Yes, I'll wait to see if there's any discussion in huggingface/transformers#37660

Contributor

@merveenoyan merveenoyan left a comment

thanks a lot!

@merveenoyan merveenoyan merged commit 930e30b into main Apr 22, 2025
5 checks passed
@merveenoyan merveenoyan deleted the pcuenca-patch-2 branch April 22, 2025 12:07
@merveenoyan
Contributor

It seems they recently updated the model card, so there's no need.
