
Commit 78031c2

[Fix] enable_xformers_memory_efficient_attention() in Flux Pipeline (#12337)
* Fixes enable_xformers_memory_efficient_attention()
* Update attention.py
1 parent d83d35c · commit 78031c2
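
The entry point this fix restores is the pipeline-level enable_xformers_memory_efficient_attention() call. A minimal usage sketch follows; the checkpoint name, dtype, prompt, and step count are illustrative assumptions rather than part of the commit, and running it requires xformers and a CUDA GPU.

import torch
from diffusers import FluxPipeline

# Illustrative Flux checkpoint; any checkpoint loadable by FluxPipeline should work.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# The call this commit fixes: swaps the default attention for
# xformers' memory-efficient attention kernels.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=4).images[0]
image.save("flux_xformers.png")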

File tree

1 file changed (+1, -1)

src/diffusers/models/attention.py

Lines changed: 1 addition & 1 deletion
@@ -241,7 +241,7 @@ def set_use_memory_efficient_attention_xformers(
                     op_fw, op_bw = attention_op
                     dtype, *_ = op_fw.SUPPORTED_DTYPES
                 q = torch.randn((1, 2, 40), device="cuda", dtype=dtype)
-                _ = xops.memory_efficient_attention(q, q, q)
+                _ = xops.ops.memory_efficient_attention(q, q, q)
             except Exception as e:
                 raise e
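
For context, the changed line lives inside a small sanity probe: before xformers attention is enabled, the model runs one tiny dummy attention call so that an unusable install or device fails loudly at enable time rather than during inference. A standalone sketch of that probe, assuming xformers and a CUDA device are available (the fully qualified xformers.ops name is used here to sidestep the import alias the diff touches):

import torch
import xformers.ops

# Tiny throwaway query/key/value tensor, same shape as in the check above.
q = torch.randn((1, 2, 40), device="cuda", dtype=torch.float16)

# If this call raises, memory-efficient attention cannot be enabled on this setup.
_ = xformers.ops.memory_efficient_attention(q, q, q)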
