
Conversation

@The-truthh (Contributor) commented on Aug 5, 2025

What does this PR do?

Based on: huggingface/diffusers#11518

Add

  • SkyReels V2 pipelines and required modules, aligned with diffusers master:
    • mindone.diffusers.SkyReelsV2DiffusionForcingPipeline
    • mindone.diffusers.SkyReelsV2DiffusionForcingImageToVideoPipeline
    • mindone.diffusers.SkyReelsV2DiffusionForcingVideoToVideoPipeline
    • mindone.diffusers.SkyReelsV2Pipeline
    • mindone.diffusers.SkyReelsV2ImageToVideoPipeline
    • mindone.diffusers.SkyReelsV2Transformer3DModel
  • add unit tests for the above modules; all pass

Usage

  • SkyReelsV2Pipeline
import mindspore as ms
from mindone.diffusers import (
    SkyReelsV2Pipeline,
    UniPCMultistepScheduler,
    AutoencoderKLWan,
)
from mindone.diffusers.utils import export_to_video

# Load the pipeline
# Available models:
# - Skywork/SkyReels-V2-T2V-14B-540P-Diffusers
# - Skywork/SkyReels-V2-T2V-14B-720P-Diffusers
vae = AutoencoderKLWan.from_pretrained(
    "Skywork/SkyReels-V2-T2V-14B-720P-Diffusers",
    subfolder="vae",
    mindspore_dtype=ms.float32,
)
pipe = SkyReelsV2Pipeline.from_pretrained(
    "Skywork/SkyReels-V2-T2V-14B-720P-Diffusers",
    vae=vae,
    mindspore_dtype=ms.bfloat16,
)
flow_shift = 8.0  # 8.0 for T2V, 5.0 for I2V
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)

prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."  # noqa: E501

output = pipe(
    prompt=prompt,
    num_inference_steps=50,
    height=544,
    width=960,
    guidance_scale=6.0,  # 6.0 for T2V, 5.0 for I2V
    num_frames=97,
)[0][0]  # output is a (videos,) tuple; take the first video's frames
export_to_video(output, "video.mp4", fps=24, quality=8)
  • SkyReelsV2DiffusionForcingPipeline
import mindspore as ms
from mindone.diffusers import (
    SkyReelsV2DiffusionForcingPipeline,
    UniPCMultistepScheduler,
    AutoencoderKLWan,
)
from mindone.diffusers.utils import export_to_video

# Load the pipeline
# Available models:
# - Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers
# - Skywork/SkyReels-V2-DF-14B-540P-Diffusers
# - Skywork/SkyReels-V2-DF-14B-720P-Diffusers
vae = AutoencoderKLWan.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers",
    subfolder="vae",
    mindspore_dtype=ms.float32,
)
pipe = SkyReelsV2DiffusionForcingPipeline.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers",
    vae=vae,
    mindspore_dtype=ms.bfloat16,
)
flow_shift = 8.0  # 8.0 for T2V, 5.0 for I2V
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)

prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."  # noqa: E501

output = pipe(
    prompt=prompt,
    num_inference_steps=30,
    height=544,
    width=960,
    guidance_scale=6.0,  # 6.0 for T2V, 5.0 for I2V
    num_frames=97,
    ar_step=5,  # Controls asynchronous inference (0 for synchronous mode)
    causal_block_size=5,  # Number of frames processed together in a causal block
    overlap_history=None,  # Number of frames to overlap for smooth transitions in long videos
    addnoise_condition=20,  # Improves consistency in long video generation
)[0][0]
export_to_video(output, "video.mp4", fps=24, quality=8)
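For long videos, set overlap_history so consecutive chunks share frames. Below is a minimal sketch reusing the pipe and prompt objects from the example above; the values num_frames=257 and overlap_history=17 are illustrative assumptions, not tuned settings:

```python
# Sketch only: long-video generation with the diffusion-forcing pipeline,
# reusing `pipe` and `prompt` from the example above. The values below
# (num_frames=257, overlap_history=17) are illustrative assumptions.
long_output = pipe(
    prompt=prompt,
    num_inference_steps=30,
    height=544,
    width=960,
    guidance_scale=6.0,
    num_frames=257,          # total frames requested for the long video
    overlap_history=17,      # frames shared between consecutive chunks for smooth transitions
    addnoise_condition=20,   # mild noise on conditioning frames improves long-video consistency
    ar_step=0,               # synchronous mode
)[0][0]
export_to_video(long_output, "long_video.mp4", fps=24, quality=8)
```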
  • SkyReelsV2DiffusionForcingImageToVideoPipeline
import mindspore as ms
from mindone.diffusers import (
    SkyReelsV2DiffusionForcingImageToVideoPipeline,
    UniPCMultistepScheduler,
    AutoencoderKLWan,
)
from mindone.diffusers.utils import export_to_video
from PIL import Image

# Load the pipeline
# Available models:
# - Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers
# - Skywork/SkyReels-V2-DF-14B-540P-Diffusers
# - Skywork/SkyReels-V2-DF-14B-720P-Diffusers
vae = AutoencoderKLWan.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers",
    subfolder="vae",
    mindspore_dtype=ms.float32,
)
pipe = SkyReelsV2DiffusionForcingImageToVideoPipeline.from_pretrained(
    "Skywork/SkyReels-V2-DF-14B-720P-Diffusers",
    vae=vae,
    mindspore_dtype=ms.bfloat16,
)
flow_shift = 5.0  # 8.0 for T2V, 5.0 for I2V
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)

prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."  # noqa: E501
image = Image.open("path/to/image.png")

output = pipe(
    image=image,
    prompt=prompt,
    num_inference_steps=50,
    height=544,
    width=960,
    guidance_scale=5.0,  # 6.0 for T2V, 5.0 for I2V
    num_frames=97,
    ar_step=0,  # Controls asynchronous inference (0 for synchronous mode)
    overlap_history=None,  # Number of frames to overlap for smooth transitions in long videos
    addnoise_condition=20,  # Improves consistency in long video generation
)[0][0]
export_to_video(output, "video.mp4", fps=24, quality=8)
  • SkyReelsV2DiffusionForcingVideoToVideoPipeline
import mindspore as ms
from mindone.diffusers import (
    SkyReelsV2DiffusionForcingVideoToVideoPipeline,
    UniPCMultistepScheduler,
    AutoencoderKLWan,
)
from mindone.diffusers.utils import export_to_video

# Load the pipeline
# Available models:
# - Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers
# - Skywork/SkyReels-V2-DF-14B-540P-Diffusers
# - Skywork/SkyReels-V2-DF-14B-720P-Diffusers
vae = AutoencoderKLWan.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers",
    subfolder="vae",
    mindspore_dtype=ms.float32,
)
pipe = SkyReelsV2DiffusionForcingVideoToVideoPipeline.from_pretrained(
    "Skywork/SkyReels-V2-DF-1.3B-540P-Diffusers",
    vae=vae,
    mindspore_dtype=ms.bfloat16,
)
flow_shift = 8.0  # 8.0 for T2V, 5.0 for I2V
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)

prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."  # noqa: E501

output = pipe(
    prompt=prompt,
    num_inference_steps=50,
    height=544,
    width=960,
    guidance_scale=6.0,  # 6.0 for T2V, 5.0 for I2V
    num_frames=97,
    ar_step=0,  # Controls asynchronous inference (0 for synchronous mode)
    overlap_history=None,  # Number of frames to overlap for smooth transitions in long videos
    addnoise_condition=20,  # Improves consistency in long video generation
)[0][0]
export_to_video(output, "video.mp4", fps=24, quality=8)
  • SkyReelsV2ImageToVideoPipeline
import mindspore as ms
from mindone.diffusers import (
    SkyReelsV2ImageToVideoPipeline,
    UniPCMultistepScheduler,
    AutoencoderKLWan,
)
from mindone.diffusers.utils import export_to_video
from PIL import Image

# Load the pipeline
# Available models:
# - Skywork/SkyReels-V2-I2V-1.3B-540P-Diffusers
# - Skywork/SkyReels-V2-I2V-14B-540P-Diffusers
# - Skywork/SkyReels-V2-I2V-14B-720P-Diffusers
vae = AutoencoderKLWan.from_pretrained(
    "Skywork/SkyReels-V2-I2V-1.3B-540P-Diffusers",
    subfolder="vae",
    mindspore_dtype=ms.float32,
)
pipe = SkyReelsV2ImageToVideoPipeline.from_pretrained(
    "Skywork/SkyReels-V2-I2V-1.3B-540P-Diffusers",
    vae=vae,
    mindspore_dtype=ms.bfloat16,
)
flow_shift = 5.0  # 8.0 for T2V, 5.0 for I2V
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)

prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."  # noqa: E501
image = Image.open("path/to/image.png")

output = pipe(
    image=image,
    prompt=prompt,
    num_inference_steps=50,
    height=544,
    width=960,
    guidance_scale=5.0,  # 6.0 for T2V, 5.0 for I2V
    num_frames=97,
)[0][0]
export_to_video(output, "video.mp4", fps=24, quality=8)

Performance

Experiments were run on Ascend Atlas 800T A2 machines with MindSpore 2.6.0.

| pipeline | mode | speed |
| --- | --- | --- |
| SkyReelsV2Pipeline | pynative | 26.17s/it |
| SkyReelsV2DiffusionForcingPipeline | pynative | 23.85s/it |
| SkyReelsV2DiffusionForcingImageToVideoPipeline | pynative | 22.04s/it |
| SkyReelsV2DiffusionForcingVideoToVideoPipeline | pynative | 28.94s/it |
| SkyReelsV2ImageToVideoPipeline | pynative | 21.53s/it |
| SkyReelsV2Pipeline | jit | 26.30s/it |
| SkyReelsV2DiffusionForcingPipeline | jit | 24.85s/it |
| SkyReelsV2DiffusionForcingImageToVideoPipeline | jit | 23.39s/it |
| SkyReelsV2DiffusionForcingVideoToVideoPipeline | jit | 23.00s/it |
| SkyReelsV2ImageToVideoPipeline | jit | 22.28s/it |

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline?
  • Did you make sure to update the documentation with your changes? E.g. record bug fixes or new features in What's New. Here are the documentation guidelines.
  • Did you build and run the code without any errors?
  • Did you report the running environment (NPU type/MS version) and performance in the doc? (better record it for data loading, model inference, or training tasks)
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@xxx

@The-truthh requested a review from vigo999 as a code owner on August 5, 2025 09:18
@gemini-code-assist (bot) left a comment:


Summary of Changes

Hello @The-truthh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've introduced the new SkyReelsV2 pipelines into the diffusers library. This feature significantly expands our video generation capabilities by integrating advanced 3D transformer models and various diffusion forcing strategies for text-to-video, image-to-video, and video-to-video tasks.

Highlights

  • Core Model Integration: I've added the SkyReelsV2Transformer3DModel, a specialized 3D transformer designed for efficient video data processing.
  • Comprehensive Pipeline Suite: I've implemented a full set of SkyReelsV2 pipelines, including standard text-to-video and image-to-video, as well as advanced diffusion forcing variants for improved temporal consistency in long video generation.
  • LoRA Compatibility: I've ensured compatibility with LoRA (Low-Rank Adaptation) by adding a dedicated SkyReelsV2LoraLoaderMixin, allowing for flexible model customization and fine-tuning.
  • Enhanced Scheduler Functionality: I've updated the UniPCMultistepScheduler to support use_flow_sigmas, which is crucial for optimizing the sampling process in these new video models (see the sketch after this list).
  • Robust Testing Framework: I've included a comprehensive suite of new tests for all SkyReelsV2 pipelines, covering various generation modes and ensuring the stability and correctness of the new features.
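
A minimal sketch of that scheduler option, assuming a pipe loaded as in the usage examples above; use_flow_sigmas mirrors the upstream diffusers flag, and flow_shift=8.0 is simply the T2V value used in this PR's examples:

```python
from mindone.diffusers import UniPCMultistepScheduler

# Sketch only: rebuild the scheduler from its existing config with
# flow-matching sigmas enabled. `flow_shift` shifts the sigma schedule
# (the examples above use 8.0 for T2V and 5.0 for I2V).
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config,
    use_flow_sigmas=True,
    flow_shift=8.0,
)
```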

@gemini-code-assist (bot) left a comment:


Code Review

This pull request adds support for SkyReelsV2 pipelines, including text-to-video, image-to-video, and video-to-video capabilities, along with diffusion forcing variants. The implementation largely follows existing patterns for new pipelines and models within the library. The changes are extensive, introducing new transformer models, LoRA loaders, and pipeline files.

My review identified a couple of issues: one is a configuration mismatch in a test file that would cause it to fail, and the other is a logic error in the video-to-video pipeline that results in incorrect output. I've provided suggestions to fix both.

patch_size=(1, 2, 2),
num_attention_heads=2,
attention_head_dim=12,
in_channels=36,

Severity: high

The in_channels for the SkyReelsV2Transformer3DModel is configured to be 36, but the pipeline prepares an input tensor with 33 channels. The input to the transformer is a concatenation of latents (16 channels), a mask (1 channel), and latent_condition (16 channels), totaling 33 channels. This mismatch will cause a runtime error. Please adjust the in_channels to match the data prepared by the pipeline.

Suggested change:
- in_channels=36,
+ in_channels=33,
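
To sanity-check the arithmetic, here is a minimal sketch with dummy tensors (the variable names and shapes are illustrative, not the pipeline's actual ones):

```python
import mindspore as ms
from mindspore import mint

# Dummy tensors only, shaped (batch, channels, frames, height, width).
latents = mint.zeros((1, 16, 5, 8, 8), dtype=ms.float32)           # 16 latent channels
mask = mint.zeros((1, 1, 5, 8, 8), dtype=ms.float32)               # 1 mask channel
latent_condition = mint.zeros((1, 16, 5, 8, 8), dtype=ms.float32)  # 16 condition channels

model_input = mint.cat([latents, mask, latent_condition], dim=1)
print(model_input.shape[1])  # 33 -> the transformer must be built with in_channels=33
```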

)
latents = latents / latents_std + latents_mean
video_generated = self.vae.decode(latents, return_dict=False)[0]
video = mint.cat([video_original, video_generated], dim=2)

Severity: high

The final video is created by concatenating the original input video with the entire generated video. For a video continuation task, this is incorrect as it results in a video longer than intended, with duplicated content. The video_generated tensor should already represent the full-length output video, with its initial frames guided by the input video. The concatenation should be removed.

Suggested change:
- video = mint.cat([video_original, video_generated], dim=2)
+ video = video_generated
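
The frame-count argument can be checked the same way; a minimal sketch with dummy tensors, assuming dim 2 is the frame axis:

```python
import mindspore as ms
from mindspore import mint

# Dummy tensors only, shaped (batch, channels, frames, height, width);
# spatial dims are kept tiny since only the frame axis (dim=2) matters here.
video_original = mint.zeros((1, 3, 33, 8, 8), dtype=ms.float32)   # 33 input frames
video_generated = mint.zeros((1, 3, 97, 8, 8), dtype=ms.float32)  # full-length output

wrong = mint.cat([video_original, video_generated], dim=2)
print(wrong.shape[2])            # 130: longer than requested, first 33 frames duplicated
print(video_generated.shape[2])  # 97: the intended output length
```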

@The-truthh force-pushed the diffusers-skyreels branch 12 times, most recently from 72c7ac5 to 5d0ee1e on August 11, 2025
@The-truthh force-pushed the diffusers-skyreels branch 14 times, most recently from b1b8e23 to fa08697 on August 20, 2025
@The-truthh force-pushed the diffusers-skyreels branch 3 times, most recently from c71e64a to d64520e on August 21, 2025
@Cui-yshoho added the "new model" label (add new model to mindone) on Aug 29, 2025
@vigo999 added this pull request to the merge queue on Aug 29, 2025
Merged via the queue into mindspore-lab:master with commit 7b77518 Aug 29, 2025
3 checks passed