
Thank you, author, for your time. After running the code I got the following error #8

@yangtutuaka

Description


(base) C:\YCRS_DATA\YCR_Code\pytorch-saltnet-master>python train.py --vtf --pretrained imagenet --loss-on-center --batch-size 32 --optim adamw --learning-rate 5e-4 --lr-scheduler noam --basenet senet154 --max-epochs 250 --data-fold fold0 --log-dir runs/fold0 --resume runs/fold0/checkpoints/last-checkpoint-fold0.pth
Load dataset list_train0_3600: 100%|█████████████████████████████████████████| 3599/3599 [00:03<00:00, 1018.55images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 974.61images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 994.73images/s]
use cuda
N of parameters 827
resuming a checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth'

Warning the checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth' doesn't exist! training from scratch!

logging into runs/fold0
training unet...
0%| | 0/250 [00:00<?, ?it/s]C:\Users\ChenRui.Yang\anaconda3\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
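Two things stand out in this log. First, the checkpoint warning just means the `--resume` path `runs/fold0/checkpoints/last-checkpoint-fold0.pth` does not exist yet, so training starts from scratch; that is expected on a first run. Second, the UserWarning at the end states the fix itself: since PyTorch 1.1.0, `optimizer.step()` must be called before `lr_scheduler.step()`, otherwise the first value of the learning-rate schedule is skipped. Below is a minimal sketch of that ordering; the model, data, and the Noam-style `LambdaLR` schedule are illustrative placeholders, not the actual training loop from this repository's `train.py`:

```python
import torch
import torch.nn as nn

# Placeholder model/optimizer; hyperparameters echo the command line
# (AdamW, lr 5e-4, batch size 32) but are otherwise illustrative.
model = nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

# A Noam-style warmup/decay via LambdaLR, assumed here to approximate
# what --lr-scheduler noam configures (warmup of 250 steps is made up).
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: min((step + 1) ** -0.5, (step + 1) * 250 ** -1.5),
)

for step in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)  # dummy batch
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()   # update the weights first...
    scheduler.step()   # ...then advance the LR schedule (PyTorch >= 1.1 order)
```

If the training script follows this order, the warning typically fires only once on the very first iteration and is harmless; if `scheduler.step()` is called before `optimizer.step()` every iteration, the schedule runs one step ahead of the weights.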
