[Wan 2.2 LoRA] add support for 2nd transformer lora loading + wan 2.2 lightx2v lora #12074

Merged

merged 34 commits into main from wan22-lightx2v on Aug 19, 2025

Commits (34)
96864fb
add alpha
linoytsaban Aug 5, 2025
0847255
load into 2nd transformer
linoytsaban Aug 5, 2025
dcce164
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 5, 2025
d083f86
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 7, 2025
5284a9c
Update src/diffusers/loaders/lora_conversion_utils.py
linoytsaban Aug 7, 2025
0a7be77
Update src/diffusers/loaders/lora_conversion_utils.py
linoytsaban Aug 7, 2025
b7e24d9
pr comments
linoytsaban Aug 7, 2025
bcb0924
pr comments
linoytsaban Aug 7, 2025
cabcf3d
pr comments
linoytsaban Aug 7, 2025
eda4d4b
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 7, 2025
4fdf400
fix
linoytsaban Aug 7, 2025
0ed988c
Merge remote-tracking branch 'origin/wan22-lightx2v' into wan22-lightx2v
linoytsaban Aug 8, 2025
f3afbf1
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 8, 2025
724b9a2
fix
linoytsaban Aug 8, 2025
daaa598
Merge remote-tracking branch 'origin/wan22-lightx2v' into wan22-lightx2v
linoytsaban Aug 8, 2025
6e8d333
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 11, 2025
b09fc48
Apply style fixes
github-actions[bot] Aug 11, 2025
af03f73
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 11, 2025
729252e
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 13, 2025
ea451d1
fix copies
Aug 13, 2025
18382f4
fix
linoytsaban Aug 13, 2025
4c425e2
fix copies
Aug 13, 2025
a57aa54
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 14, 2025
52ede6f
Merge branch 'main' into wan22-lightx2v
sayakpaul Aug 18, 2025
386cf1c
Update src/diffusers/loaders/lora_pipeline.py
linoytsaban Aug 18, 2025
64d9b04
Merge branch 'main' into wan22-lightx2v
linoytsaban Aug 18, 2025
2a5b07d
revert change
linoytsaban Aug 18, 2025
3c57672
Merge remote-tracking branch 'origin/wan22-lightx2v' into wan22-lightx2v
linoytsaban Aug 18, 2025
f1f1f33
revert change
linoytsaban Aug 18, 2025
d83a592
fix copies
Aug 18, 2025
c3cb4a6
Merge branch 'main' into wan22-lightx2v
sayakpaul Aug 19, 2025
ce5be55
up
sayakpaul Aug 19, 2025
5e21c4d
Merge branch 'main' into wan22-lightx2v
sayakpaul Aug 19, 2025
0559eac
fix
sayakpaul Aug 19, 2025
122 changes: 86 additions & 36 deletions src/diffusers/loaders/lora_conversion_utils.py
@@ -1829,6 +1829,18 @@ def _convert_non_diffusers_wan_lora_to_diffusers(state_dict):
k.startswith("time_projection") and k.endswith(".weight") for k in original_state_dict
)

def get_alpha_scales(down_weight, alpha_key):
rank = down_weight.shape[0]
alpha = original_state_dict.pop(alpha_key).item()
scale = alpha / rank  # LoRA is scaled by 'alpha / rank' in the forward pass, so we need to scale it back here
scale_down = scale
scale_up = 1.0
while scale_down * 2 < scale_up:
scale_down *= 2
scale_up /= 2
return scale_down, scale_up


for key in list(original_state_dict.keys()):
if key.endswith((".diff", ".diff_b")) and "norm" in key:
# NOTE: we don't support this because norm layer diff keys are just zeroed values. We can support it
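To make the alpha handling concrete, here is a minimal standalone sketch of the same scale-splitting trick (the helper name and tensors are placeholders, not part of the PR; torch is assumed). The product `scale_down * scale_up` always equals `alpha / rank`, so the effective LoRA delta is unchanged; the loop only rebalances the factor between the two matrices so neither is multiplied by an extreme value.

```python
import torch

# Hypothetical standalone version of the get_alpha_scales helper above.
def split_alpha_scale(down_weight: torch.Tensor, alpha: float):
    rank = down_weight.shape[0]
    scale = alpha / rank  # effective LoRA scale applied in the forward pass
    scale_down, scale_up = scale, 1.0
    # Rebalance the factor between the two matrices; the product stays fixed.
    while scale_down * 2 < scale_up:
        scale_down *= 2
        scale_up /= 2
    return scale_down, scale_up

down = torch.randn(16, 64)  # rank-16 LoRA "down" matrix
scale_down, scale_up = split_alpha_scale(down, alpha=8.0)
assert abs(scale_down * scale_up - 8.0 / 16) < 1e-9  # still alpha / rank
```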
@@ -1848,15 +1860,26 @@ def _convert_non_diffusers_wan_lora_to_diffusers(state_dict):
for i in range(min_block, max_block + 1):
# Self-attention
for o, c in zip(["q", "k", "v", "o"], ["to_q", "to_k", "to_v", "to_out.0"]):
original_key = f"blocks.{i}.self_attn.{o}.{lora_down_key}.weight"
converted_key = f"blocks.{i}.attn1.{c}.lora_A.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)
alpha_key = f"blocks.{i}.self_attn.{o}.alpha"
has_alpha = alpha_key in original_state_dict
original_key_A = f"blocks.{i}.self_attn.{o}.{lora_down_key}.weight"
converted_key_A = f"blocks.{i}.attn1.{c}.lora_A.weight"

original_key = f"blocks.{i}.self_attn.{o}.{lora_up_key}.weight"
converted_key = f"blocks.{i}.attn1.{c}.lora_B.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)
original_key_B = f"blocks.{i}.self_attn.{o}.{lora_up_key}.weight"
converted_key_B = f"blocks.{i}.attn1.{c}.lora_B.weight"

if has_alpha:
down_weight = original_state_dict.pop(original_key_A)
up_weight = original_state_dict.pop(original_key_B)
scale_down, scale_up = get_alpha_scales(down_weight, alpha_key)
converted_state_dict[converted_key_A] = down_weight * scale_down
converted_state_dict[converted_key_B] = up_weight * scale_up

else:
if original_key_A in original_state_dict:
converted_state_dict[converted_key_A] = original_state_dict.pop(original_key_A)
if original_key_B in original_state_dict:
converted_state_dict[converted_key_B] = original_state_dict.pop(original_key_B)

original_key = f"blocks.{i}.self_attn.{o}.diff_b"
converted_key = f"blocks.{i}.attn1.{c}.lora_B.bias"
@@ -1865,15 +1888,24 @@ def _convert_non_diffusers_wan_lora_to_diffusers(state_dict):

# Cross-attention
for o, c in zip(["q", "k", "v", "o"], ["to_q", "to_k", "to_v", "to_out.0"]):
original_key = f"blocks.{i}.cross_attn.{o}.{lora_down_key}.weight"
converted_key = f"blocks.{i}.attn2.{c}.lora_A.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)

original_key = f"blocks.{i}.cross_attn.{o}.{lora_up_key}.weight"
converted_key = f"blocks.{i}.attn2.{c}.lora_B.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)
alpha_key = f"blocks.{i}.cross_attn.{o}.alpha"
has_alpha = alpha_key in original_state_dict
original_key_A = f"blocks.{i}.cross_attn.{o}.{lora_down_key}.weight"
converted_key_A = f"blocks.{i}.attn2.{c}.lora_A.weight"

original_key_B = f"blocks.{i}.cross_attn.{o}.{lora_up_key}.weight"
converted_key_B = f"blocks.{i}.attn2.{c}.lora_B.weight"

if original_key_A in original_state_dict:
down_weight = original_state_dict.pop(original_key_A)
converted_state_dict[converted_key_A] = down_weight
if original_key_B in original_state_dict:
up_weight = original_state_dict.pop(original_key_B)
converted_state_dict[converted_key_B] = up_weight
if has_alpha:
scale_down, scale_up = get_alpha_scales(down_weight, alpha_key)
converted_state_dict[converted_key_A] *= scale_down
converted_state_dict[converted_key_B] *= scale_up

original_key = f"blocks.{i}.cross_attn.{o}.diff_b"
converted_key = f"blocks.{i}.attn2.{c}.lora_B.bias"
@@ -1882,15 +1914,24 @@ def _convert_non_diffusers_wan_lora_to_diffusers(state_dict):

if is_i2v_lora:
for o, c in zip(["k_img", "v_img"], ["add_k_proj", "add_v_proj"]):
original_key = f"blocks.{i}.cross_attn.{o}.{lora_down_key}.weight"
converted_key = f"blocks.{i}.attn2.{c}.lora_A.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)

original_key = f"blocks.{i}.cross_attn.{o}.{lora_up_key}.weight"
converted_key = f"blocks.{i}.attn2.{c}.lora_B.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)
alpha_key = f"blocks.{i}.cross_attn.{o}.alpha"
has_alpha = alpha_key in original_state_dict
original_key_A = f"blocks.{i}.cross_attn.{o}.{lora_down_key}.weight"
converted_key_A = f"blocks.{i}.attn2.{c}.lora_A.weight"

original_key_B = f"blocks.{i}.cross_attn.{o}.{lora_up_key}.weight"
converted_key_B = f"blocks.{i}.attn2.{c}.lora_B.weight"

if original_key_A in original_state_dict:
down_weight = original_state_dict.pop(original_key_A)
converted_state_dict[converted_key_A] = down_weight
if original_key_B in original_state_dict:
up_weight = original_state_dict.pop(original_key_B)
converted_state_dict[converted_key_B] = up_weight
if has_alpha:
scale_down, scale_up = get_alpha_scales(down_weight, alpha_key)
converted_state_dict[converted_key_A] *= scale_down
converted_state_dict[converted_key_B] *= scale_up

original_key = f"blocks.{i}.cross_attn.{o}.diff_b"
converted_key = f"blocks.{i}.attn2.{c}.lora_B.bias"
@@ -1899,15 +1940,24 @@ def _convert_non_diffusers_wan_lora_to_diffusers(state_dict):

# FFN
for o, c in zip(["ffn.0", "ffn.2"], ["net.0.proj", "net.2"]):
original_key = f"blocks.{i}.{o}.{lora_down_key}.weight"
converted_key = f"blocks.{i}.ffn.{c}.lora_A.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)

original_key = f"blocks.{i}.{o}.{lora_up_key}.weight"
converted_key = f"blocks.{i}.ffn.{c}.lora_B.weight"
if original_key in original_state_dict:
converted_state_dict[converted_key] = original_state_dict.pop(original_key)
alpha_key = f"blocks.{i}.{o}.alpha"
has_alpha = alpha_key in original_state_dict
original_key_A = f"blocks.{i}.{o}.{lora_down_key}.weight"
converted_key_A = f"blocks.{i}.ffn.{c}.lora_A.weight"

original_key_B = f"blocks.{i}.{o}.{lora_up_key}.weight"
converted_key_B = f"blocks.{i}.ffn.{c}.lora_B.weight"

if original_key_A in original_state_dict:
down_weight = original_state_dict.pop(original_key_A)
converted_state_dict[converted_key_A] = down_weight
if original_key_B in original_state_dict:
up_weight = original_state_dict.pop(original_key_B)
converted_state_dict[converted_key_B] = up_weight
if has_alpha:
scale_down, scale_up = get_alpha_scales(down_weight, alpha_key)
converted_state_dict[converted_key_A] *= scale_down
converted_state_dict[converted_key_B] *= scale_up

original_key = f"blocks.{i}.{o}.diff_b"
converted_key = f"blocks.{i}.ffn.{c}.lora_B.bias"
@@ -2072,4 +2122,4 @@ def _convert_non_diffusers_ltxv_lora_to_diffusers(state_dict, non_diffusers_prefix):
raise ValueError("Invalid LoRA state dict for LTX-Video.")
converted_state_dict = {k.removeprefix(f"{non_diffusers_prefix}."): v for k, v in state_dict.items()}
converted_state_dict = {f"transformer.{k}": v for k, v in converted_state_dict.items()}
return converted_state_dict
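As a concrete illustration of what the conversion above produces, the sketch below shows the renaming for a single self-attention projection (a sketch only; the rank, dimensions, and tensors are made-up placeholders, and the `lora_down`/`lora_up` key naming is one of the accepted variants):

```python
import torch

# A toy non-diffusers Wan LoRA entry for the blocks.0 q-projection (rank 4).
original_state_dict = {
    "blocks.0.self_attn.q.lora_down.weight": torch.randn(4, 5120),
    "blocks.0.self_attn.q.lora_up.weight": torch.randn(5120, 4),
    "blocks.0.self_attn.q.alpha": torch.tensor(2.0),
}

# After _convert_non_diffusers_wan_lora_to_diffusers, the keys follow the
# diffusers naming scheme, with the alpha folded into the tensors:
#   blocks.0.attn1.to_q.lora_A.weight  == lora_down.weight * scale_down
#   blocks.0.attn1.to_q.lora_B.weight  == lora_up.weight * scale_up
# where scale_down * scale_up == alpha / rank == 2.0 / 4.
```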
38 changes: 28 additions & 10 deletions src/diffusers/loaders/lora_pipeline.py
@@ -5064,7 +5064,7 @@ class WanLoraLoaderMixin(LoraBaseMixin):
Load LoRA layers into [`WanTransformer3DModel`]. Specific to [`WanPipeline`] and [`WanImageToVideoPipeline`].
"""

_lora_loadable_modules = ["transformer"]
_lora_loadable_modules = ["transformer", "transformer_2"]
Member:
Just to note that this loader is shared amongst Wan 2.1 and 2.2, as the pipelines are also one and the same. For Wan 2.1, we won't have any transformer_2.

transformer_name = TRANSFORMER_NAME

@classmethod
@@ -5269,15 +5269,33 @@ def load_lora_weights(
if not is_correct_format:
raise ValueError("Invalid LoRA checkpoint.")

self.load_lora_into_transformer(
state_dict,
transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer,
adapter_name=adapter_name,
metadata=metadata,
_pipeline=self,
low_cpu_mem_usage=low_cpu_mem_usage,
hotswap=hotswap,
)
load_into_transformer_2 = kwargs.pop("load_into_transformer_2", False)
if load_into_transformer_2:
if getattr(self, "transformer_2", None) is None:
    raise ValueError(
        "Cannot load LoRA into transformer_2: transformer_2 is not available for this model. "
        "Ensure the model has a transformer_2 component before setting load_into_transformer_2=True."
    )
self.load_lora_into_transformer(
state_dict,
transformer=self.transformer_2,
adapter_name=adapter_name,
metadata=metadata,
_pipeline=self,
low_cpu_mem_usage=low_cpu_mem_usage,
hotswap=hotswap,
)
else:
self.load_lora_into_transformer(
state_dict,
transformer=getattr(self, self.transformer_name) if not hasattr(self, "transformer") else self.transformer,
adapter_name=adapter_name,
metadata=metadata,
_pipeline=self,
low_cpu_mem_usage=low_cpu_mem_usage,
hotswap=hotswap,
)
Member:
Why put it under else?

Collaborator (author):
My thought process was that, as opposed to LoRAs with weights for, e.g., both the transformer and the text encoder, which we load in a single load_lora_weights op, here we can have a situation where we have different weights for each transformer but identical state_dict keys. Also, this way we can load the LoRA into each transformer separately with different adapter names, making it easy to use a different scale for each transformer's LoRA (which was seen to be beneficial for quality). I'm happy to improve this logic, but these are the considerations to keep in mind.

Member:
Yeah. So, in case users want to load both transformers, won't it just load one if load_into_transformer_2=True?

Collaborator (author):
Yep, it would; they would need to load into each transformer separately.

Member:
Can you show some pseudo-code expected from the users? This is another way of loading another adapter into transformer_2: #12040 (comment)

Collaborator (author):
I don't feel strongly about it staying that exact way, but I do think it should remain possible to load different LoRA weights into the transformers and at different scales.

Member:
Makes sense. Let's go with this, but with a note in the docstrings saying it's experimental in nature.
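A minimal usage sketch of the pattern discussed in this thread (a sketch only; the checkpoint and LoRA repo IDs, adapter names, and scale values are placeholders, not taken from the PR):

```python
import torch
from diffusers import WanImageToVideoPipeline

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16
)

# Load the LoRA into the first transformer under one adapter name...
pipe.load_lora_weights("some-org/wan22-lightx2v-lora", adapter_name="lightx2v")

# ...and load it again into transformer_2 under a second adapter name.
pipe.load_lora_weights(
    "some-org/wan22-lightx2v-lora",
    adapter_name="lightx2v_2",
    load_into_transformer_2=True,
)

# Separate adapter names allow a different scale per transformer.
pipe.set_adapters(["lightx2v", "lightx2v_2"], adapter_weights=[1.0, 0.6])
```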


@classmethod
# Copied from diffusers.loaders.lora_pipeline.SD3LoraLoaderMixin.load_lora_into_transformer with SD3Transformer2DModel->WanTransformer3DModel