
[Issue]: SD.next report conv2d error and halts generating when launching animatediff. #4518

@tex2100

Issue Description

My laptop cannot run FramePack because it has no discrete GPU. Instead I tried AnimateDiff in SD.next, but generation halts with a conv2d error. According to Grok's analysis, the existing diffusers pipeline crashes when combined with the AnimateDiff pipeline.
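A minimal sketch of what the traceback suggests (assumption: shapes are taken from the logged error, not from SD.next internals). AnimateDiff latents are 5D, `(batch, channels, num_frames, height, width)`, while a plain SDXL 2D UNet's `conv_in` is an `nn.Conv2d` that only accepts 3D/4D input; feeding the video latent straight through reproduces the reported `RuntimeError`. Toy channel/spatial sizes are used here to keep it cheap:

```python
import torch

# AnimateDiff produces a 5D video latent: (B, C, F, H, W).
# The logged failure used size [2, 4, 16, 128, 128]; toy H/W here.
video_latent = torch.randn(2, 4, 16, 32, 32)

# A 2D UNet's conv_in (nn.Conv2d) rejects 5D input outright.
conv_in = torch.nn.Conv2d(4, 8, kernel_size=3, padding=1)  # toy channel count
try:
    conv_in(video_latent)
    raised = False
except RuntimeError:
    # "Expected 3D (unbatched) or 4D (batched) input to conv2d ..."
    raised = True

# A motion-aware UNet folds the frame dimension into the batch first,
# so each frame passes through the 2D conv as an ordinary image.
b, c, f, h, w = video_latent.shape
folded = video_latent.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)
out = conv_in(folded)
print(out.shape)  # torch.Size([32, 8, 32, 32])
```

This is why the error points at a pipeline mismatch: the 5D latent reached a 2D `conv_in` instead of a motion-aware UNet that performs the fold shown above.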

Version Platform Description

AMD Ryzen AI 5 340, Radeon 840M, 32 GB DDR5

Relevant log output

PS C:\ai\sdnext> .\webui-user.bat
23:50:03-807616 INFO     Starting SD.Next
23:50:03-810620 INFO     Logger: file="C:\ai\sdnext\sdnext.log" level=DEBUG host="DESKTOP-SGFT5T6" size=83 mode=create
23:50:03-812619 INFO     Python: version=3.11.9 platform=Windows
                         bin="C:\Users\winbbs\AppData\Local\Programs\Python\Python311\python.exe"
                         venv="C:\Users\winbbs\AppData\Local\Programs\Python\Python311"
23:50:04-187069 INFO     Version: app=sd.next updated=2026-01-01 commit=98d304a49 branch=dev
                         url=https://github.com/vladmandic/sdnext/tree/dev kanvas=main ui=dev
23:50:05-048739 TRACE    Repository branches: active=dev available=['dev', 'master', 'upstream']
23:50:05-357751 INFO     Version: app=sd.next latest=2026-01-01T10:22:43Z hash=98d304a4 branch=dev
23:50:05-370753 INFO     Platform: arch=AMD64 cpu=AMD64 Family 26 Model 96 Stepping 0, AuthenticAMD system=Windows
                         release=Windows-10-10.0.26200-SP0 python=3.11.9 locale=('Korean_Korea', '949') docker=False
23:50:05-372757 DEBUG    Packages: prefix=..\..\Users\winbbs\AppData\Local\Programs\Python\Python311
                         site=['..\\..\\Users\\winbbs\\AppData\\Local\\Programs\\Python\\Python311',
                         '..\\..\\Users\\winbbs\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages']
23:50:05-373753 INFO     Args: ['--use-openvino', '--update']
23:50:05-374757 DEBUG    Setting environment tuning
23:50:05-375753 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
23:50:05-376751 INFO     Verifying torch installation
23:50:05-377751 DEBUG    Torch overrides: cuda=False rocm=False ipex=False directml=False openvino=True zluda=False
23:50:05-378756 INFO     OpenVINO: selected
23:50:08-851620 INFO     Install: verifying requirements
23:50:08-865623 INFO     Startup: standard
23:50:08-866622 INFO     Verifying submodules
23:50:11-967331 DEBUG    Git submodule: extensions-builtin/sd-extension-chainner / main
23:50:13-000186 DEBUG    Git submodule: extensions-builtin/sd-extension-system-info / main
23:50:14-096793 DEBUG    Git submodule: extensions-builtin/sdnext-kanvas / main
23:50:15-254761 DEBUG    Git submodule: extensions-builtin/sdnext-modernui / dev
23:50:16-483354 DEBUG    Git submodule: extensions-builtin/stable-diffusion-webui-rembg / master
23:50:17-610514 DEBUG    Git submodule: wiki / master
23:50:18-592488 DEBUG    Installed packages: 284
23:50:18-594613 DEBUG    Extensions all: ['sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-kanvas', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
23:50:18-694966 DEBUG    Git submodule: extensions-builtin\sd-extension-chainner / main
23:50:19-810119 DEBUG    Git submodule: extensions-builtin\sd-extension-system-info / main
23:50:21-845470 ERROR    Git: folder="extensions-builtin\sd-webui-agent-scheduler" could not identify repository
23:50:22-008982 DEBUG    Git submodule: extensions-builtin\sdnext-kanvas / main
23:50:23-200422 DEBUG    Git submodule: extensions-builtin\sdnext-modernui / dev
23:50:24-429407 DEBUG    Git submodule: extensions-builtin\stable-diffusion-webui-rembg / master
23:50:25-314396 DEBUG    Extension installer: builtin=True
                         file="C:\ai\sdnext\extensions-builtin\stable-diffusion-webui-rembg\install.py"
23:50:25-379327 DEBUG    Extensions all: []
23:50:25-380331 INFO     Extensions enabled: ['sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-kanvas', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
23:50:25-446836 INFO     Install: verifying requirements
23:50:25-450510 INFO     Updating Wiki
23:50:25-544435 DEBUG    Git submodule: C:\ai\sdnext\wiki / master
23:50:26-489975 WARNING  Setup complete with errors: ['git: extensions-builtin\\sd-webui-agent-scheduler']
23:50:26-490975 WARNING  See log file for more details: C:\ai\sdnext\sdnext.log
23:50:26-494979 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
23:50:26-495975 INFO     Installer time: total=56.35 git=17.33 update=14.12 submodules=9.66 torch=3.23 latest=1.54
                         branch=1.44 sdnext-modernui=1.24 sdnext-kanvas=1.19 sd-extension-system-info=1.14
                         sd-extension-chainner=1.12 sd-webui-agent-scheduler=1.06 stable-diffusion-webui-rembg=1.05
                         wiki=1.04 base=0.37 version=0.36 files=0.18 requirements=0.14 installed=0.13
23:50:26-497974 INFO     Command line args: ['--use-openvino', '--update'] upgrade=True use_openvino=True args=[]
23:50:26-497974 DEBUG    Env flags: []
23:50:26-498975 DEBUG    Starting module: <module 'webui' from 'C:\\ai\\sdnext\\webui.py'>
23:50:58-122529 INFO     Torch: torch==2.6.0+cpu torchvision==0.21.0+cpu
23:50:58-125530 INFO     Packages: diffusers==0.36.0.dev0 transformers==4.57.3 accelerate==1.12.0 gradio==3.43.2
                         pydantic==1.10.21 numpy==2.1.2 cv2==4.12.0
HIP Library Path: C:\WINDOWS\System32\amdhip64_7.dll
23:50:58-895829 DEBUG    ONNX: version=1.22.1, available=['AzureExecutionProvider', 'CPUExecutionProvider']
23:50:59-267699 DEBUG    State initialized: id=2110282198864
23:51:00-067941 WARNING  GPU stats: Torch not compiled with CUDA enabled
23:51:00-068944 DEBUG    Triton: pass=False fn=<module>:has_triton time=0.00
23:51:00-201225 DEBUG    Read: file="C:\ai\sdnext\config.json" json=28 bytes=1089 time=0.000 fn=<module>:load
23:51:00-203226 INFO     Engine: backend=Backend.DIFFUSERS compute=openvino device=cpu attention="Scaled-Dot-Product"
                         mode=no_grad
23:51:00-213228 DEBUG    Read: file="html\reference.json" json=151 bytes=72399 time=0.009
                         fn=_call_with_frames_removed:<module>
23:51:00-214227 DEBUG    Torch attention: type="sdpa" kernels=['Flash', 'Memory', 'Math'] overrides=[]
23:51:00-216228 DEBUG    Torch attention installed: flashattn=False sageattention=False
23:51:00-216228 WARNING  Torch SDPA: module 'diffusers.models.attention_dispatch' has no attribute '_CAN_USE_AITER_ATTN'
23:51:00-217231 DEBUG    Triton: pass=False fn=<module>:set_cuda_params time=0.00
23:51:00-234417 WARNING  OpenVINO: device=CPU no compatible GPU detected
23:51:00-237415 INFO     Torch parameters: backend=openvino device=CPU config=FP32 dtype=torch.float32 context=no_grad
                         nohalf=False nohalfvae=False upcast=False deterministic=False tunable=[False, False] fp16=fail
                         bf16=fail triton=fail optimization="Scaled-Dot-Product"
23:51:00-249411 DEBUG    Quantization: registered=SDNQ
23:51:00-273410 INFO     Device: device=AMD Ryzen AI 5 340 w/ Radeon 840M               openvino=2026.0.0.dev20251128
23:51:00-674304 DEBUG    Entering start sequence
23:51:00-680112 DEBUG    Initializing
23:51:00-689114 DEBUG    Read: file="metadata.json" json=2 bytes=1546 time=0.008 fn=initialize:init_metadata
23:51:00-698111 DEBUG    Read: file="cache.json" json=1 bytes=350 time=0.007 fn=initialize:init_cache
23:51:00-756203 INFO     Available VAEs: path="models\VAE" items=0
23:51:00-758203 INFO     Available UNets: path="models\UNET" items=0
23:51:00-759201 INFO     Available TEs: path="models\Text-encoder" items=0
23:51:00-766440 INFO     Available Models: safetensors="models\Stable-diffusion":2 diffusers="models\Diffusers":3
                         reference=151 items=5 time=0.00
23:51:00-780442 INFO     Available LoRAs: path="models\Lora" items=0 folders=2 time=0.00
23:51:01-591721 INFO     Available Styles: path="models\styles" items=288 time=0.81
23:51:01-698771 INFO     Available Detailer: path="models\yolo" items=15 downloaded=7
23:51:01-700778 DEBUG    Extensions: disabled=['sdnext-modernui']
23:51:01-701770 INFO     Load extensions
23:51:01-979355 DEBUG    Extensions init time: total=0.28
23:51:02-267128 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.011 fn=__init__:__init__
23:51:02-279992 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=25 bytes=2830 time=0.010
                         fn=__init__:find_scalers
23:51:02-282257 DEBUG    Available chaiNNer: path="models\chaiNNer" defined=25 discovered=0 downloaded=0
23:51:02-286244 INFO     Available Upscalers: items=76 downloaded=0 user=0 time=0.30 types=['None', 'Resize', 'Latent',
                         'AsymmetricVAE', 'WanUpscale', 'DCC', 'VIPS', 'ChaiNNer', 'AuraSR', 'ESRGAN', 'RealESRGAN',
                         'SCUNet', 'Diffusion', 'SeedVR', 'SwinIR']
23:51:02-337878 INFO     Networks: type="video" engines=13 models=67 errors=0 time=0.04
23:51:02-342525 INFO     Huggingface: transfer=rust parallel=True direct=False token="None" cache="models\huggingface"
                         init
23:51:02-359253 WARNING  Cache location changed: previous="C:\Users\winbbs\.cache\huggingface\hub" size=3657 MB
23:51:02-368253 DEBUG    Huggingface: cache="models\huggingface" size=7682 MB
23:51:02-369254 DEBUG    UI start sequence
23:51:02-370396 DEBUG    UI image support: kanvas=main
23:51:02-395168 INFO     UI locale: name="Auto"
23:51:02-396166 INFO     UI theme: type=Standard name="black-teal" available=14
23:51:02-398166 DEBUG    UI theme: css="C:\ai\sdnext\javascript\black-teal.css" base="['sdnext.css', 'timesheet.css']"
                         user="None"
23:51:02-401167 DEBUG    UI initialize: tab=txt2img
23:51:02-443120 DEBUG    Read: file="html\reference.json" json=151 bytes=72399 time=0.001 fn=list_items:list_reference
23:51:02-464918 DEBUG    Networks: type="reference" items={'total': 151, 'ready': 0, 'hidden': 0, 'experimental': 0,
                         'base': 95, 'distilled': 18, 'quantized': 19, 'community': 15, 'cloud': 2}
23:51:02-473917 DEBUG    Networks: type="model" items=154 subfolders=8 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Reference', 'C:\\ai\\sdnext\\models\\Stable-diffusion'] list=0.06 thumb=0.00 desc=0.00
                         info=0.01 workers=12
23:51:02-475917 DEBUG    Networks: type="lora" items=0 subfolders=1 tab=txt2img folders=['models\\Lora'] list=0.01
                         thumb=0.00 desc=0.00 info=0.00 workers=12
23:51:02-484917 DEBUG    Networks: type="style" items=288 subfolders=3 tab=txt2img folders=['models\\styles', 'html']
                         list=0.02 thumb=0.00 desc=0.00 info=0.00 workers=12
23:51:02-487214 DEBUG    Networks: type="wildcards" items=0 subfolders=1 tab=txt2img folders=['models\\wildcards']
                         list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=12
23:51:02-489214 DEBUG    Networks: type="embedding" items=3 subfolders=1 tab=txt2img folders=['models\\embeddings']
                         list=0.01 thumb=0.00 desc=0.00 info=0.00 workers=12
23:51:02-490213 DEBUG    Networks: type="vae" items=0 subfolders=1 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=12
23:51:02-492212 DEBUG    Networks: type="history" items=0 subfolders=1 tab=txt2img folders=[] list=0.00 thumb=0.00
                         desc=0.00 info=0.00 workers=12
23:51:02-604930 DEBUG    UI initialize: tab=img2img
23:51:03-042415 DEBUG    UI initialize: tab=control models="models\control"
23:51:03-268370 DEBUG    UI initialize: tab=video
23:51:03-347108 DEBUG    UI initialize: tab=process
23:51:03-373586 DEBUG    UI initialize: tab=caption
23:51:03-503931 DEBUG    UI initialize: tab=models
23:51:03-547738 DEBUG    UI initialize: tab=gallery
23:51:03-571199 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.000 fn=__init__:read_from_file
23:51:03-572201 DEBUG    UI initialize: tab=settings
23:51:03-576198 ERROR    Pipeline=HunyuanImage diffusers=0.36.0.dev0
                         path=C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\__init
                         __.py not available
23:51:03-577198 ERROR    Pipeline=Z-Image diffusers=0.36.0.dev0
                         path=C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\__init
                         __.py not available
23:51:03-578198 ERROR    Pipeline=LongCat diffusers=0.36.0.dev0
                         path=C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\__init
                         __.py not available
23:51:03-814838 DEBUG    Settings: sections=23 settings=387/601 quicksettings=1
23:51:03-856822 DEBUG    UI initialize: tab=info
23:51:03-869820 DEBUG    UI initialize: tab=extensions
23:51:03-872819 INFO     Extension List: No information found. Refresh required.
23:51:04-457539 DEBUG    Extension list: processed=4 installed=4 enabled=4 disabled=0 visible=4 hidden=0
23:51:04-942178 DEBUG    Root paths: ['C:\\ai\\sdnext', 'models']
23:51:05-043821 INFO     Local URL: http://127.0.0.1:7860/
23:51:05-045821 INFO     API docs: http://127.0.0.1:7860/docs
23:51:05-046820 INFO     API redocs: http://127.0.0.1:7860/redocs
23:51:05-049783 DEBUG    API middleware: [<class 'starlette.middleware.base.BaseHTTPMiddleware'>, <class
                         'starlette.middleware.gzip.GZipMiddleware'>]
23:51:05-050787 DEBUG    API initialize
23:51:05-200549 DEBUG    Scripts setup: time=0.210 []
23:51:05-201550 DEBUG    Model metadata: file="metadata.json" no changes
23:51:05-202549 INFO     Model: autoload=True selected="AnythingXL_xl [8421598e93]"
23:51:05-204553 DEBUG    Model requested: fn=threading.py:run:<lambda>
23:51:05-205551 DEBUG    Search model: name="AnythingXL_xl [8421598e93]"
                         matched="C:\ai\sdnext\models\Stable-diffusion\AnythingXL_xl.safetensors" type=alias
23:51:05-206226 INFO     Load model: select="AnythingXL_xl [8421598e93]"
23:51:05-208234 ERROR    Pipeline=HunyuanImage diffusers=0.36.0.dev0
                         path=C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\__init
                         __.py not available
23:51:05-208234 ERROR    Pipeline=Z-Image diffusers=0.36.0.dev0
                         path=C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\__init
                         __.py not available
23:51:05-209233 ERROR    Pipeline=LongCat diffusers=0.36.0.dev0
                         path=C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\__init
                         __.py not available
23:51:05-210238 INFO     Autodetect model: detect="Stable Diffusion XL" class=StableDiffusionXLPipeline
                         file="C:\ai\sdnext\models\Stable-diffusion\AnythingXL_xl.safetensors"
23:51:05-211234 DEBUG    Cache clear
Progress  1.24s/it █████████ 100% 7/7 00:08 00:00 Loading pipeline components...
23:51:15-152861 DEBUG    Setting model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True,
                         'torch_dtype': torch.float32, 'load_connected_pipeline': True, 'extract_ema': False, 'config':
                         'configs/sdxl', 'use_safetensors': True, 'cache_dir': 'models\\huggingface'}
23:51:15-253376 INFO     Network load: type=embeddings loaded=0 skipped=3 time=0.08
23:51:15-255373 DEBUG    Setting model: component=vae {'slicing': True, 'tiling': False}
23:51:15-256372 DEBUG    Setting model: attention="Scaled-Dot-Product"
23:51:15-258372 DEBUG    Setting model: offload=none limit=0.0
23:51:15-260372 INFO     Model compile: pipeline=StableDiffusionXLPipeline mode=default backend=openvino_fx options=[]
                         compile=['Model', 'VAE', 'Upscaler', 'Control']
23:51:15-263371 DEBUG    Model compile: task=torch backends=['cudagraphs', 'inductor', 'onnxrt', 'openvino',
                         'openvino_fx', 'openxla', 'tvm']
23:51:15-342376 INFO     Model compile: task=torch time=0.08
23:51:16-040639 DEBUG    GC: current={'gpu': 0, 'ram': 23.47, 'oom': 0} prev={'gpu': 0, 'ram': 23.39} load={'gpu': 0,
                         'ram': 77} gc={'gpu': 0, 'py': 6548} fn=reload_model_weights:load_diffuser why=load time=0.70
23:51:16-043639 INFO     Load model: family=sdxl time={'total': 10.14, 'load': 9.97} native=1024 memory={'ram':
                         {'total': 30.52, 'rss': 11.96, 'used': 23.47, 'free': 7.05, 'avail': 7.05, 'buffers': 0,
                         'cached': 0}, 'gpu': {'total': 0, 'used': 0, 'error': 'Torch not compiled with CUDA enabled',
                         'swap': 0}, 'job': 'Load model'}
23:51:16-049150 DEBUG    Script init: ['system-info.py:app_started=0.08']
23:51:16-051152 INFO     Startup time: total=119.05 launch=23.69 loader=23.17 installer=23.17 torch=22.57
                         checkpoint=10.85 diffusers=4.39 gradio=3.54 libraries=3.20 ui-extensions=0.88 styles=0.81
                         ui-networks=0.56 upscalers=0.31 extensions=0.28 numpy=0.20 ui-control=0.15 ui-models=0.15
                         cv2=0.15 detailer=0.11
23:51:16-063653 DEBUG    Save: file="C:\ai\sdnext\config.json" json=28 bytes=1058 time=0.015
23:52:47-849024 DEBUG    UI: connected
23:52:47-852026 INFO     API user=None code=200 http/1.1 GET /sdapi/v1/version 127.0.0.1 0.004
23:53:16-114285 TRACE    Server: alive=True requests=8 memory=23.88/30.52 status='idle' task='' timestamp=None
                         current='' id='' job=0 jobs=0 total=2 step=0 steps=0 queued=0 uptime=137 elapsed=120.06
                         eta=None progress=0
23:53:42-816881 DEBUG    Control image: upload=<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x1200 at
                         0x1EB61667B50>
                         path="C:\Users\winbbs\AppData\Local\Temp\gradio\d919e383a72950679031ed7f535559c53091ad7b\00823-
                         2026-01-01-AnythingXL_xl.jpg"
23:54:01-493035 DEBUG    Kanvas: image=<PIL.Image.Image image mode=RGB size=1519x1200 at 0x1EB61309010>:1961430
                         mask=None:0 time=0.03
23:54:01-495040 DEBUG    Select input: type=Kanvas source=[<PIL.Image.Image image mode=RGB size=1519x1200 at
                         0x1EB61309010>] init=None mask=None mode=Kanvas time=0.03
23:54:04-288171 DEBUG    Control Processor loaded: id="OpenPose" class=OpenposeDetector time=1.89
23:54:04-289170 DEBUG    Control ControlNet model loading: id="OpenPose XL"
                         path="thibaud/controlnet-openpose-sdxl-1.0/bin"
23:54:04-959644 DEBUG    Control compile: task=torch backends=['cudagraphs', 'inductor', 'onnxrt', 'openvino',
                         'openvino_fx', 'openxla', 'tvm']
23:54:04-979645 INFO     Control compile: task=torch time=0.02
23:54:04-981644 INFO     Control ControlNet model loaded: id="OpenPose XL" path="thibaud/controlnet-openpose-sdxl-1.0"
                         cls=ControlNetModel time=0.69
23:54:04-982644 DEBUG    Control unit: i=1 type=ControlNet process="OpenPose" model="OpenPose XL" strength=1 guess=False
                         start=0 end=1 mode=default
23:54:04-984644 DEBUG    Setting model: offload=none limit=0.0
23:54:04-985644 DEBUG    Setting model: component=vae {'slicing': True, 'tiling': False}
23:54:04-985644 DEBUG    Setting model: attention="Scaled-Dot-Product"
23:54:04-986644 DEBUG    Setting model: offload=none limit=0.0
23:54:04-997456 DEBUG    Image resize: source=1200:1200 target=1519:1200 mode="Fixed" upscaler="None" type=image
                         time=0.01 fn=preprocess_image:__call__
23:54:05-576971 DEBUG    Control Processor: id="OpenPose" mode=RGB args={'include_body': True, 'include_hand': False,
                         'include_face': False} time=0.58
23:54:05-581973 DEBUG    Pipeline class change: original=StableDiffusionXLControlNetPipeline
                         target=StableDiffusionXLControlNetImg2ImgPipeline device=cpu fn=control_run:preprocess_image
23:54:05-603971 INFO     AnimateDiff load: adapter="a-r-r-o-w/animatediff-motion-adapter-sdxl-beta"
23:54:06-856725 DEBUG    Setting model: attention="Scaled-Dot-Product"
23:54:06-859725 DEBUG    Setting adapter: offload=none limit=0.0
23:54:06-868725 DEBUG    Setting model: component=vae {'slicing': True, 'tiling': False}
23:54:06-869725 DEBUG    Setting model: attention="Scaled-Dot-Product"
23:54:06-870725 DEBUG    Setting model: offload=none limit=0.0
23:54:06-871725 DEBUG    AnimateDiff: adapter="a-r-r-o-w/animatediff-motion-adapter-sdxl-beta"
23:54:06-873725 DEBUG    AnimateDiff: scheduler=DDIMScheduler
23:54:06-874726 DEBUG    AnimateDiff args: {'controlnet_conditioning_scale': 1.0, 'control_guidance_start': 0.0,
                         'control_guidance_end': 1.0, 'guess_mode': False, 'generator': None, 'num_frames': 16,
                         'num_inference_steps': 20, 'output_type': 'np'}
23:54:06-875726 DEBUG    AnimateDiff prompt: masterpiece, front view, 1girl, standing, flower field, The dress covers
                         the shoulders with a white flower, and the chest and belly are hidden under a pink fabric,
                         skirt resembles like is bud and each edge has pointed, translucent pink fabric covers her head,
                         purple eyes, blue hair, upper body close to the camera, occupying the foreground
23:54:06-891725 DEBUG    Sampler: "Default" cls=EulerAncestralDiscreteScheduler config={'num_train_timesteps': 1000,
                         'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas':
                         None, 'prediction_type': 'epsilon', 'timestep_spacing': 'trailing', 'steps_offset': 1,
                         'rescale_betas_zero_snr': False, 'interpolation_type': 'linear', 'use_karras_sigmas': False,
                         'clip_sample': False, 'sample_max_value': 1.0, 'set_alpha_to_one': False, 'skip_prk_steps':
                         True}
23:54:06-971234 INFO     Processing modifiers: apply
23:54:06-994234 DEBUG    Setting model: offload=none limit=0.0
23:54:07-185761 INFO     Base: pipeline=AnimateDiffSDXLPipeline task=TEXT_2_IMAGE batch=1/1x1 set={'prompt': 340,
                         'negative_prompt': 147, 'guidance_scale': 6, 'generator': 'cpu:[1585077217]',
                         'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0, 'denoising_end': None,
                         'num_frames': 16, 'output_type': 'np', 'parser': 'fixed'}
Progress ?it/s                                              0% 0/20 00:01 ? Base
23:54:09-493079 ERROR    Processing: step=base args={'prompt': 'masterpiece, front view, 1girl, standing, flower field,
                         The dress covers the shoulders with a white flower, and the chest and belly are hidden under a
                         pink fabric, skirt resembles like is bud and each edge has pointed, translucent pink fabric
                         covers her head, purple eyes, blue hair, upper body close to the camera, occupying the
                         foreground', 'negative_prompt': 'happy expression, clean clothes, simple background, low
                         quality, blurry, distorted anatomy, FastNegativeV2, easynegative, verybadimagenegative_v1.3',
                         'guidance_scale': 6, 'generator': None, 'callback_on_step_end': <function diffusers_callback at
                         0x000001EB5E689620>, 'callback_on_step_end_tensor_inputs': ['latents', 'prompt_embeds',
                         'negative_prompt_embeds', 'add_text_embeds', 'add_time_ids', 'negative_pooled_prompt_embeds',
                         'negative_add_time_ids'], 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0,
                         'denoising_end': None, 'num_frames': 16, 'output_type': 'np'} Expected 3D (unbatched) or 4D
                         (batched) input to conv2d, but got input of size: [2, 4, 16, 128, 128]
23:54:09-496079 ERROR    Processing: RuntimeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│C:\ai\sdnext\modules\processing_diffusers.py:180 in process_base                                                      │
│                                                                                                                      │
│  179 │   │   │   taskid = shared.state.begin('Inference')                                                            │
│❱ 180 │   │   │   output = shared.sd_model(**base_args)                                                               │
│  181 │   │   │   shared.state.end(taskid)                                                                            │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\utils\_contextlib.py:116 in decorate_ │
│                                                                                                                      │
│  115 │   │   with ctx_factory():                                                                                     │
│❱ 116 │   │   │   return func(*args, **kwargs)                                                                        │
│  117                                                                                                                 │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\pipelines\animatediff\pipeline_an │
│                                                                                                                      │
│  1239 │   │   │   │   │                                                                                              │
│❱ 1240 │   │   │   │   │   noise_pred = self.unet(                                                                    │
│  1241 │   │   │   │   │   │   latent_model_input,                                                                    │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py:1739 in _wrapped │
│                                                                                                                      │
│  1738 │   │   else:                                                                                                  │
│❱ 1739 │   │   │   return self._call_impl(*args, **kwargs)                                                            │
│  1740                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py:1750 in _call_im │
│                                                                                                                      │
│  1749 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                        │
│❱ 1750 │   │   │   return forward_call(*args, **kwargs)                                                               │
│  1751                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\_dynamo\eval_frame.py:574 in _fn      │
│                                                                                                                      │
│   573 │   │   │   try:                                                                                               │
│❱  574 │   │   │   │   return fn(*args, **kwargs)                                                                     │
│   575 │   │   │   finally:                                                                                           │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py:1739 in _wrapped │
│                                                                                                                      │
│  1738 │   │   else:                                                                                                  │
│❱ 1739 │   │   │   return self._call_impl(*args, **kwargs)                                                            │
│  1740                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py:1750 in _call_im │
│                                                                                                                      │
│  1749 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                        │
│❱ 1750 │   │   │   return forward_call(*args, **kwargs)                                                               │
│  1751                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py │
│                                                                                                                      │
│  1167 │   │   # 2. pre-process                                                                                       │
│❱ 1168 │   │   sample = self.conv_in(sample)                                                                          │
│  1169                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py:1739 in _wrapped │
│                                                                                                                      │
│  1738 │   │   else:                                                                                                  │
│❱ 1739 │   │   │   return self._call_impl(*args, **kwargs)                                                            │
│  1740                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py:1750 in _call_im │
│                                                                                                                      │
│  1749 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                        │
│❱ 1750 │   │   │   return forward_call(*args, **kwargs)                                                               │
│  1751                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\conv.py:554 in forward     │
│                                                                                                                      │
│   553 │   def forward(self, input: Tensor) -> Tensor:                                                                │
│❱  554 │   │   return self._conv_forward(input, self.weight, self.bias)                                               │
│   555                                                                                                                │
│                                                                                                                      │
│C:\Users\winbbs\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\conv.py:549 in _conv_forwa │
│                                                                                                                      │
│   548 │   │   │   )                                                                                                  │
│❱  549 │   │   return F.conv2d(                                                                                       │
│   550 │   │   │   input, weight, bias, self.stride, self.padding, self.dilation, self.groups                         │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [2, 4, 16, 128, 128]
23:54:09-912135 DEBUG    Search model: name="AnythingXL_xl [8421598e93]"
                         matched="C:\ai\sdnext\models\Stable-diffusion\AnythingXL_xl.safetensors" type=alias
23:54:09-929138 DEBUG    Analyzed: model="AnythingXL_xl" type=AnimateDiffSDXLPipeline class=AnimateDiffSDXLPipeline
                         size=6938041578 mtime="2025-07-03 23:36:51" modules=[name="vae" cls=AutoencoderKL config=True
                         device=cpu dtype=torch.float32 params=83653863 modules=243, name="text_encoder"
                         cls=CLIPTextModel config=True device=cpu dtype=torch.float32 params=123060480 modules=152,
                         name="text_encoder_2" cls=CLIPTextModelWithProjection config=True device=cpu
                         dtype=torch.float32 params=694659840 modules=393, name="tokenizer" cls=CLIPTokenizer
                         config=False, name="tokenizer_2" cls=CLIPTokenizer config=False, name="unet"
                         cls=OptimizedModule config=True device=cpu dtype=torch.float32 params=2567463684 modules=1931,
                         name="motion_adapter" cls=MotionAdapter config=True device=cpu dtype=torch.float32
                         params=236779200 modules=465, name="scheduler" cls=EulerAncestralDiscreteScheduler config=True,
                         name="image_encoder" cls=NoneType config=False, name="feature_extractor" cls=NoneType
                         config=False, name="force_zeros_for_empty_prompt" cls=bool config=False]
23:54:09-931293 INFO     Processing modifiers: unapply
23:54:09-939296 DEBUG    Process: batch=1/1 interrupted
23:54:09-943800 INFO     Processed: images=0 its=0.00 ops=['img2img', 'control', 'video']
23:54:09-944806 DEBUG    Processed: timers={'total': 3.07, 'post': 2.95}
23:54:09-945803 DEBUG    Processed: memory={'ram': {'total': 30.52, 'rss': 13.6, 'used': 25.57, 'free': 4.95, 'avail':
                         4.95, 'buffers': 0, 'cached': 0}, 'gpu': {'total': 0, 'used': 0, 'error': 'Torch not compiled
                         with CUDA enabled', 'swap': 0}, 'job': ''}
23:54:09-947056 DEBUG    AnimateDiff video: type=MP4/MP4V duration=2 loop=True pad=1 interpolate=0
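For reference, the RuntimeError above can be reproduced in isolation: the pipeline hands a 5D video latent `[batch, channels, frames, height, width]` to the 2D UNet's `conv_in`, which only accepts 3D/4D input. A minimal sketch (the tensor sizes are taken from the log; the fold-frames-into-batch step mirrors what an AnimateDiff-aware UNet wrapper is expected to do before calling 2D convolutions — this is an illustration, not the actual SD.next code path):

```python
import torch
import torch.nn.functional as F

# 5D video latent exactly as reported in the traceback:
# [batch=2, channels=4, frames=16, height=128, width=128]
latent = torch.randn(2, 4, 16, 128, 128)
# Hypothetical conv_in weight of an SD UNet: 4 -> 320 channels, 3x3 kernel
weight = torch.randn(320, 4, 3, 3)

# Feeding the 5D tensor directly raises the same RuntimeError as in the log
try:
    F.conv2d(latent, weight, padding=1)
except RuntimeError as e:
    print(e)  # Expected 3D (unbatched) or 4D (batched) input to conv2d ...

# Motion-module pipelines fold the frame axis into the batch axis first,
# so each frame is convolved as an ordinary 4D image tensor
b, c, f, h, w = latent.shape
flat = latent.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)  # [32, 4, 128, 128]
out = F.conv2d(flat, weight, padding=1)
print(out.shape)  # [32, 320, 128, 128]
```

The error therefore suggests the 5D latent reaches a plain `UNet2DConditionModel.forward` without the frame axis being folded away, i.e. the AnimateDiff pipeline and the loaded UNet are out of sync.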

Backend: Diffusers
Compute: OpenVINO
Interface: Standard
Branch: Dev
Model: StableDiffusion 1.5/2.1/XL

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue
