Commit 90818e8

[docs] Fix syntax error in quantization configuration (#13076)

1 parent 430c557 commit 90818e8

File tree

1 file changed: +1 −1 lines changed


docs/source/en/quantization/torchao.md

Lines changed: 1 addition & 1 deletion

@@ -66,7 +66,7 @@ from diffusers import DiffusionPipeline, PipelineQuantizationConfig, TorchAoConf
 from torchao.quantization import Int4WeightOnlyConfig
 
 pipeline_quant_config = PipelineQuantizationConfig(
-    quant_mapping={"transformer": TorchAoConfig(Int4WeightOnlyConfig(group_size=128)))}
+    quant_mapping={"transformer": TorchAoConfig(Int4WeightOnlyConfig(group_size=128))}
 )
 pipeline = DiffusionPipeline.from_pretrained(
     "black-forest-labs/FLUX.1-dev",
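The change is a pure syntax repair: the removed line carries one more closing parenthesis than opening ones, so the documented snippet could never run. A quick sanity check of the before/after lines needs neither diffusers nor torchao installed; the sketch below feeds each line to Python's built-in `compile()` (the `f(...)` wrapper is only scaffolding so the keyword-argument form is legal standalone syntax, and `f` is a hypothetical name, not a diffusers API):

```python
# Reproduce the docs bug without any ML dependencies: the pre-fix line has
# an extra ")" after group_size=128, which unbalances the parens/braces.
before = 'quant_mapping={"transformer": TorchAoConfig(Int4WeightOnlyConfig(group_size=128)))}'
after = 'quant_mapping={"transformer": TorchAoConfig(Int4WeightOnlyConfig(group_size=128))}'

def parses(snippet: str) -> bool:
    # Wrap the keyword argument in a dummy call so it is a valid expression,
    # then ask the Python parser whether the whole thing compiles.
    try:
        compile(f"f({snippet})", "<diff>", "eval")
        return True
    except SyntaxError:
        return False

print(parses(before))  # False: third ")" closes a brace that is still open
print(parses(after))   # True: parentheses and braces now balance
```

The same check generalizes to any one-line syntax fix in documentation snippets: compiling the fragment catches unbalanced delimiters before a reader copy-pastes them.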

0 commit comments