Simplifications & new docs
This release focused on a ton of bug fixes and small optimizations to training, but most importantly: clean new docs!
Major changes
- We have released new documentation; please bear with us as we fix broken links and patch in missing pieces.
- The project has moved to the new PyTorchLightning organization, so the repository no longer lives under WilliamFalcon/PyTorchLightning.
- We have added our own `TensorBoardLogger` as the default logger.
- We have upgraded Continuous Integration to speed up automated testing.
- We have fixed GAN training by supporting multiple optimizers (see the sketch below).
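For context on the multi-optimizer support, here is a minimal, hypothetical sketch of the pattern: `configure_optimizers` returns a list of optimizers and `training_step` receives an `optimizer_idx` telling it which one is active. The `ToyGAN` module, its layers, the batch layout, and the losses are illustrative placeholders, and the exact hook signatures may differ between Lightning versions.

```python
import torch
from torch import nn
import pytorch_lightning as pl


class ToyGAN(pl.LightningModule):
    """Illustrative sketch: two optimizers alternate automatically."""

    def __init__(self):
        super().__init__()
        self.generator = nn.Linear(32, 784)       # toy generator (e.g. flattened 28x28 images)
        self.discriminator = nn.Linear(784, 1)    # toy discriminator

    def training_step(self, batch, batch_idx, optimizer_idx):
        real, _ = batch                            # assumes (images, labels) batches
        real = real.view(real.size(0), -1)
        z = torch.randn(real.size(0), 32, device=real.device)
        if optimizer_idx == 0:
            # generator step: try to fool the discriminator
            loss = -torch.sigmoid(self.discriminator(self.generator(z))).log().mean()
        else:
            # discriminator step: separate real from fake samples
            fake = self.generator(z).detach()
            loss = -(torch.sigmoid(self.discriminator(real)).log().mean()
                     + (1 - torch.sigmoid(self.discriminator(fake))).log().mean())
        return {'loss': loss}

    def configure_optimizers(self):
        # returning a list of optimizers is what enables alternating GAN updates
        return [torch.optim.Adam(self.generator.parameters(), lr=2e-4),
                torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)]
```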
Complete changelog
Added
- Added support for resuming from a specific checkpoint via the `resume_from_checkpoint` argument (#516); see the usage sketch after this list
- Added support for the `ReduceLROnPlateau` scheduler (#320)
- Added support for Apex mode `O2` in conjunction with Data Parallel (#493)
- Added an option (`save_top_k`) to save the top k models in the `ModelCheckpoint` class (#128)
- Added `on_train_start` and `on_train_end` hooks to `ModelHooks` (#598)
- Added `TensorBoardLogger` (#607)
- Added support for weight summary of models with multiple inputs (#543)
- Added `map_location` argument to `load_from_metrics` and `load_from_checkpoint` (#625)
- Added option to disable validation by setting `val_percent_check=0` (#649)
- Added `NeptuneLogger` class (#648)
- Added `WandbLogger` class (#627)
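Several of these additions are `Trainer` or callback options. Below is a minimal, hypothetical sketch of how they might be combined; the import paths, the `filepath` argument, and the checkpoint path are assumptions for this release and may differ in other versions.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.logging import TensorBoardLogger  # import path is an assumption for this release

# keep the 3 best checkpoints instead of only the latest one
checkpoint = ModelCheckpoint(filepath='checkpoints/', save_top_k=3)

trainer = pl.Trainer(
    logger=TensorBoardLogger(save_dir='lightning_logs'),  # NeptuneLogger / WandbLogger attach the same way
    checkpoint_callback=checkpoint,
    resume_from_checkpoint='checkpoints/epoch_9.ckpt',    # hypothetical path: resume from a specific checkpoint
    val_percent_check=0,                                   # disable validation entirely
)
# trainer.fit(model)  # `model` is your LightningModule
```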
Changed
- Changed the default progress bar to print to stdout instead of stderr (#531)
- Renamed `step_idx` to `step`, `epoch_idx` to `epoch`, `max_num_epochs` to `max_epochs` and `min_num_epochs` to `min_epochs` (#589)
- Renamed several `Trainer` attributes (#567): `total_batch_nb` to `total_batches`, `nb_val_batches` to `num_val_batches`, `nb_training_batches` to `num_training_batches`, `max_nb_epochs` to `max_epochs`, `min_nb_epochs` to `min_epochs`, and `nb_test_batches` to `num_test_batches` (see the rename sketch after this list)
- Changed gradient logging to use parameter names instead of indexes (#660)
- Changed the default logger to `TensorBoardLogger` (#609)
- Changed the directory for tensorboard logging to be the same as model checkpointing (#706)
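As a quick illustration of the epoch-limit rename, under the assumption that the corresponding `Trainer` keyword arguments follow the same naming:

```python
from pytorch_lightning import Trainer

# old, now-deprecated spelling:
# trainer = Trainer(max_nb_epochs=20, min_nb_epochs=1)

# new spelling after the rename:
trainer = Trainer(max_epochs=20, min_epochs=1)
```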
Deprecated
- Deprecated `max_nb_epochs` and `min_nb_epochs` (#567)
- Deprecated the `on_sanity_check_start` hook in `ModelHooks` (#598)
Removed
- Removed the `save_best_only` argument from `ModelCheckpoint`, use `save_top_k=1` instead (#128)
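A minimal sketch of that migration, assuming the `filepath` argument and import path used here match this release:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# before: ModelCheckpoint(filepath='checkpoints/', save_best_only=True)
# after:  keep only the single best checkpoint via save_top_k
checkpoint = ModelCheckpoint(filepath='checkpoints/', save_top_k=1)
```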
Fixed
- Fixed a bug which occurred when using Adagrad with CUDA (#554)
- Fixed a bug where training would run on the GPU despite setting `gpus=0` or `gpus=[]` (#561)
- Fixed an error with `print_nan_gradients` when some parameters do not require gradient (#579)
- Fixed a bug where the progress bar would show an incorrect number of total steps during the validation sanity check when using multiple validation data loaders (#597)
- Fixed support for PyTorch 1.1.0 (#552)
- Fixed an issue with early stopping when using `val_check_interval < 1.0` in `Trainer` (#492)
- Fixed bugs relating to the `CometLogger` object that would cause it to not work properly (#481)
- Fixed a bug that would occur when returning `-1` from `on_batch_start` following an early exit or when the batch was `None` (#509)
- Fixed a potential race condition with several processes trying to create checkpoint directories (#530)
- Fixed a bug where batch 'segments' would remain on the GPU when using `truncated_bptt > 1` (#532)
- Fixed a bug when using `IterableDataset` (#547)
- Fixed a bug where `.item` was called on non-tensor objects (#602)
- Fixed a bug where `Trainer.train` would crash on an uninitialized variable if the trainer was run after resuming from a checkpoint that was already at `max_epochs` (#608)
- Fixed a bug where early stopping would begin two epochs early (#617)
- Fixed a bug where `num_training_batches` and `num_test_batches` would sometimes be rounded down to zero (#649)
- Fixed a bug where an additional batch would be processed when manually setting `num_training_batches` (#653)
- Fixed a bug when batches did not have a `.copy` method (#701)
- Fixed a bug when using `log_gpu_memory=True` in Python 3.6 (#715)
- Fixed a bug where checkpoint writing could exit before completion, giving incomplete checkpoints (#689)
- Fixed a bug where `on_train_end` was not called when early stopping (#723)
Contributors
@akhti, @alumae, @awaelchli, @Borda, @borisdayma, @ctlaltdefeat, @dreamgonfly, @elliotwaite, @fdiehl, @goodok, @haossr, @HarshSharma12, @Ir1d, @jakubczakon, @jeffling, @kuynzereb, @MartinPernus, @matthew-z, @MikeScarp, @mpariente, @neggert, @rwesterman, @ryanwongsa, @schwobr, @tullie, @vikmary, @VSJMilewski, @williamFalcon, @YehCF
If we forgot someone because their commit email did not match their GitHub account, let us know :]