feat(diffusers/pipelines): add pipelines of skyreels_v2 in diffusers master #1203
Conversation
Summary of Changes
Hello @The-truthh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've introduced the new SkyReelsV2 pipelines into the diffusers library. This feature significantly expands our video generation capabilities by integrating advanced 3D transformer models and various diffusion forcing strategies for text-to-video, image-to-video, and video-to-video tasks.
Highlights
- Core Model Integration: I've added the SkyReelsV2Transformer3DModel, a specialized 3D transformer designed for efficient video data processing.
- Comprehensive Pipeline Suite: I've implemented a full set of SkyReelsV2 pipelines, including standard text-to-video and image-to-video, as well as advanced diffusion forcing variants for improved temporal consistency in long video generation.
- LoRA Compatibility: I've ensured compatibility with LoRA (Low-Rank Adaptation) by adding a dedicated SkyReelsV2LoraLoaderMixin, allowing for flexible model customization and fine-tuning.
- Enhanced Scheduler Functionality: I've updated the UniPCMultistepScheduler to support use_flow_sigmas, which is crucial for optimizing the sampling process in these new video models (see the usage sketch after this list).
- Robust Testing Framework: I've included a comprehensive suite of new tests for all SkyReelsV2 pipelines, covering various generation modes and ensuring the stability and correctness of the new features.
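To make the highlights above concrete, here is a minimal, hedged usage sketch. The module path `mindone.diffusers`, the checkpoint id, and the `flow_shift` value are assumptions based on the PR description and the upstream integration (huggingface/diffusers#11518), not verified against the final merged API.

```python
# Hedged sketch: SkyReelsV2 text-to-video with flow sigmas enabled on the scheduler.
# Checkpoint id and flow_shift are placeholders, not confirmed by this PR.
import mindspore as ms
from mindone.diffusers import SkyReelsV2Pipeline, UniPCMultistepScheduler

pipe = SkyReelsV2Pipeline.from_pretrained(
    "Skywork/SkyReels-V2-T2V-14B-540P-Diffusers",  # placeholder checkpoint id
    mindspore_dtype=ms.bfloat16,
)
# use_flow_sigmas is the scheduler option this PR adds support for.
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, use_flow_sigmas=True, flow_shift=8.0
)

output = pipe(
    prompt="A koala bear playing piano in the forest",
    num_frames=97,
    guidance_scale=6.0,
)
# How to unpack `output` depends on the pipeline's return type (commonly a list of frames).
```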
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Code Review
This pull request adds support for SkyReelsV2 pipelines, including text-to-video, image-to-video, and video-to-video capabilities, along with diffusion forcing variants. The implementation largely follows existing patterns for new pipelines and models within the library. The changes are extensive, introducing new transformer models, LoRA loaders, and pipeline files.
My review identified a couple of issues: one is a configuration mismatch in a test file that would cause it to fail, and the other is a logic error in the video-to-video pipeline that results in incorrect output. I've provided suggestions to fix both.
```python
patch_size=(1, 2, 2),
num_attention_heads=2,
attention_head_dim=12,
in_channels=36,
```
The `in_channels` for the `SkyReelsV2Transformer3DModel` is configured to be 36, but the pipeline prepares an input tensor with 33 channels. The input to the transformer is a concatenation of `latents` (16 channels), a mask (1 channel), and `latent_condition` (16 channels), totaling 33 channels. This mismatch will cause a runtime error. Please adjust the `in_channels` to match the data prepared by the pipeline; a small sketch of the channel arithmetic follows the suggestion below.
```diff
- in_channels=36,
+ in_channels=33,
```
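To make the channel arithmetic concrete, here is a small self-contained check. The tensor names mirror the review comment, but the shapes are dummies chosen for illustration; only the channel dimension matters.

```python
# Illustrative check of the 16 + 1 + 16 = 33 channel concatenation described above.
from mindspore import mint

latents = mint.zeros((1, 16, 4, 8, 8))           # 16 latent channels
mask = mint.zeros((1, 1, 4, 8, 8))               # 1 mask channel
latent_condition = mint.zeros((1, 16, 4, 8, 8))  # 16 conditioning channels

model_input = mint.cat([latents, mask, latent_condition], dim=1)
print(model_input.shape[1])  # 33 -> the transformer's in_channels must be 33, not 36
```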
```python
)
latents = latents / latents_std + latents_mean
video_generated = self.vae.decode(latents, return_dict=False)[0]
video = mint.cat([video_original, video_generated], dim=2)
```
The final video is created by concatenating the original input video with the entire generated video. For a video continuation task this is incorrect, as it results in a video longer than intended, with duplicated content. The `video_generated` tensor should already represent the full-length output video, with its initial frames guided by the input video. The concatenation should be removed; a shape check illustrating this follows the suggestion below.
```diff
- video = mint.cat([video_original, video_generated], dim=2)
+ video = video_generated
```
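As a sanity check on the suggestion, the sketch below contrasts the frame counts; the axis layout [B, C, F, H, W] and the concrete frame numbers are assumptions made only for illustration.

```python
# Dummy tensors illustrating why the concatenation duplicates content:
# video_generated is already the full-length output.
from mindspore import mint

video_original = mint.zeros((1, 3, 17, 32, 32))   # 17 conditioning frames
video_generated = mint.zeros((1, 3, 97, 32, 32))  # full-length generated video

concatenated = mint.cat([video_original, video_generated], dim=2)
print(concatenated.shape[2])     # 114 frames: longer than intended, duplicated start
print(video_generated.shape[2])  # 97 frames: the intended output
```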
- Force-pushed from 72c7ac5 to 5d0ee1e
- Force-pushed from b1b8e23 to fa08697
- Force-pushed from fa08697 to 51ef526
- Force-pushed from c71e64a to d64520e
- Force-pushed from d64520e to 7b23333
What does this PR do?
based on: huggingface/diffusers#11518
Add
Usage
Performance
Experiments were run on Ascend Atlas 800T A2 machines with MindSpore 2.6.0.
Before submitting
What's New
Here are the documentation guidelines.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@xxx