Standard weekly patch release.

## Detailed changes

### Added
- Added PyTorch 1.7 Stable support (#3821)
- Added timeout for `tpu_device_exists` to ensure process does not hang indefinitely (#4340)

### Changed
- W&B log in sync with `Trainer` step (#4405)
- Hook `on_after_backward` is called only when `optimizer_step` is being called (#4439)
- Moved `track_and_norm_grad` into `training loop` and called only when `optimizer_step` is being called (#4439)
- Changed type checker with explicit cast of `ref_model` object (#4457)

### Deprecated
- Deprecated passing `ModelCheckpoint` instance to `checkpoint_callback` Trainer argument (#4336)

### Fixed
- Disable saving checkpoints if not trained (#4372)
- Fixed error using `auto_select_gpus=True` with `gpus=-1` (#4209)
- Disabled training when `limit_train_batches=0` (#4371)
- Fixed that metrics do not store computational graph for all seen data (#4313)
- Fixed AMP unscale for `on_after_backward` (#4439)
- Fixed TorchScript export when module includes Metrics (#4428)
- Fixed CSV logger warning (#4419)
- Fixed skip DDP parameter sync (#4301)

## Contributors
@ananthsub, @awaelchli, @borisdayma, @carmocca, @justusschock, @lezwon, @rohitgr7, @SeanNaren, @SkafteNicki, @ssaru, @tchaton, @ydcjeff
If we forgot someone because a commit email didn't match a GitHub account, let us know :]