# Stability and additional improvements
## App

### Added

- Added the ability to set up basic authentication for Lightning apps (#16105)

### Changed

- The `LoadBalancer` now uses the internal IP and port instead of the exposed URL (#16119)
- Added support for logging in different trainer stages with `DeviceStatsMonitor` (#16002)
- Changed `lightning_app.components.serve.gradio` to `lightning_app.components.serve.gradio_server` (#16201)
- Made cluster creation/deletion async by default (#16185)

### Fixed

- Fixed not being able to run multiple Lightning apps locally due to port collisions (#15819)
- Avoid a `relpath` bug on Windows (#16164)
- Avoid using the deprecated `LooseVersion` (#16162)
- Ported fixes to the autoscaler component (#16249)
- Fixed a bug where `lightning login` with environment variables would not correctly save the credentials (#16339)
## Fabric

### Added

- Added `Fabric.launch()` to programmatically launch processes (e.g. in Jupyter notebooks) (#14992)
- Added the option to launch Fabric scripts from the CLI, without the need to wrap the code into the `run` method (#14992)
- Added `Fabric.setup_module()` and `Fabric.setup_optimizers()` to support strategies that need to set up the model before an optimizer can be created (#15185)
- Added support for Fully Sharded Data Parallel (FSDP) training in Lightning Lite (#14967)
- Added the `lightning_fabric.accelerators.find_usable_cuda_devices` utility function (#16147)
- Added basic support for LightningModules (#16048)
- Added support for managing callbacks via `Fabric(callbacks=...)` and emitting events through `Fabric.call()` (#16074)
- Added Logger support (#16121)
  - Added `Fabric(loggers=...)` to support different logger frameworks in Fabric
  - Added `Fabric.log` for logging scalars using multiple loggers
  - Added `Fabric.log_dict` for logging a dictionary of multiple metrics at once
  - Added `Fabric.loggers` and `Fabric.logger` attributes to access the individual logger instances
  - Added support for calling `self.log` and `self.log_dict` in a LightningModule when using Fabric
  - Added access to `self.logger` and `self.loggers` in a LightningModule when using Fabric
- Added `lightning_fabric.loggers.TensorBoardLogger` (#16121)
- Added `lightning_fabric.loggers.CSVLogger` (#16346)
- Added support for a consistent `.zero_grad(set_to_none=...)` on the wrapped optimizer regardless of which strategy is used (#16275)
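To illustrate the `set_to_none` semantics that the wrapped optimizer now forwards consistently, here is a minimal plain-PyTorch sketch (no Fabric involved; the toy parameter and `SGD` optimizer are made up for the example):

```python
import torch

# A toy parameter and optimizer to illustrate zero_grad(set_to_none=...).
param = torch.nn.Parameter(torch.ones(3))
optimizer = torch.optim.SGD([param], lr=0.1)

param.sum().backward()
optimizer.zero_grad(set_to_none=True)   # gradient tensor is freed entirely
assert param.grad is None

param.sum().backward()
optimizer.zero_grad(set_to_none=False)  # gradient is kept, filled with zeros
assert torch.all(param.grad == 0)
```

`set_to_none=True` saves memory between steps; `set_to_none=False` keeps the gradient buffers allocated, which some custom training loops rely on.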
### Changed

- Renamed the class `LightningLite` to `Fabric` (#15932, #15938)
- The `Fabric.run()` method is no longer abstract (#14992)
- The `XLAStrategy` now inherits from `ParallelStrategy` instead of `DDPSpawnStrategy` (#15838)
- Merged the implementation of `DDPSpawnStrategy` into `DDPStrategy` and removed `DDPSpawnStrategy` (#14952)
- The dataloader wrapper returned from `.setup_dataloaders()` now calls `.set_epoch()` on the distributed sampler if one is used (#16101)
- Renamed `Strategy.reduce` to `Strategy.all_reduce` in all strategies (#16370)
- When using multiple devices, the strategy now defaults to "ddp" instead of "ddp_spawn" when none is set (#16388)
### Removed

- Removed support for FairScale's sharded training (`strategy='ddp_sharded'|'ddp_sharded_spawn'`). Use Fully Sharded Data Parallel instead (`strategy='fsdp'`) (#16329)
### Fixed

- Restored sampling parity between PyTorch and Fabric dataloaders when using the `DistributedSampler` (#16101)
- Fixed an issue where the error message wouldn't tell the user the real value that was passed through the CLI (#16334)
## PyTorch

### Added

- Added support for native logging of `MetricCollection` with enabled compute groups (#15580)
- Added support for custom artifact names in `pl.loggers.WandbLogger` (#16173)
- Added support for DDP with `LRFinder` (#15304)
- Added utilities to migrate checkpoints from one Lightning version to another (#15237)
- Added support for upgrading all checkpoints in a folder using the `pl.utilities.upgrade_checkpoint` script (#15333)
- Added an axes argument `ax` to `.lr_find().plot()` to enable writing to a user-defined axes in a matplotlib figure (#15652)
- Added a `log_model` parameter to `MLFlowLogger` (#9187)
- Added a check to validate that wrapped FSDP models are used while initializing optimizers (#15301)
- Added a warning when `self.log(..., logger=True)` is called without a configured logger (#15814)
- Added support for colossalai 0.1.11 (#15888)
- Added `LightningCLI` support for optimizers and learning rate schedulers via callable type dependency injection (#15869)
- Added support for activation checkpointing for the `DDPFullyShardedNativeStrategy` strategy (#15826)
- Added the option to set `DDPFullyShardedNativeStrategy(cpu_offload=True|False)` via bool instead of needing to pass a configuration object (#15832)
- Added an info message for Ampere CUDA GPU users to enable tf32 matmul precision (#16037)
- Added support for returning optimizer-like classes in `LightningModule.configure_optimizers` (#16189)
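To illustrate what an "optimizer-like class" means here, the following plain-PyTorch sketch wraps `SGD` behind the duck-typed optimizer interface. The `WrappedSGD` name is hypothetical; with #16189, an object like this can be returned from `configure_optimizers` even though it is not a `torch.optim.Optimizer` subclass:

```python
import torch

class WrappedSGD:
    """Hypothetical optimizer-like class: not a torch.optim.Optimizer
    subclass, but exposes the same duck-typed interface."""

    def __init__(self, params, lr):
        self._opt = torch.optim.SGD(params, lr=lr)
        self.param_groups = self._opt.param_groups
        self.defaults = self._opt.defaults

    def step(self, closure=None):
        return self._opt.step(closure)

    def zero_grad(self, set_to_none=True):
        self._opt.zero_grad(set_to_none=set_to_none)

    def state_dict(self):
        return self._opt.state_dict()

    def load_state_dict(self, state):
        self._opt.load_state_dict(state)

# Sanity check outside any Trainer: the wrapper steps like a real optimizer.
param = torch.nn.Parameter(torch.zeros(1))
opt = WrappedSGD([param], lr=1.0)
param.sum().backward()  # grad becomes 1.0
opt.step()              # SGD update: 0 - 1.0 * 1.0 = -1.0
```

In a `LightningModule`, `configure_optimizers` could then simply `return WrappedSGD(self.parameters(), lr=1.0)`.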
### Changed

- Switched from `tensorboard` to `tensorboardX` in `TensorBoardLogger` (#15728)
- From now on, the Lightning Trainer and `LightningModule.load_from_checkpoint` automatically upgrade the loaded checkpoint if it was produced in an old version of Lightning (#15237)
- `Trainer.{validate,test,predict}(ckpt_path=...)` no longer restores the `Trainer.global_step` and `trainer.current_epoch` values from the checkpoints. From now on, only `Trainer.fit` will restore these values (#15532)
- The `ModelCheckpoint.save_on_train_epoch_end` attribute is now computed dynamically every epoch, accounting for changes to the validation dataloaders (#15300)
- The Trainer now raises an error if it is given multiple stateful callbacks of the same type with colliding state keys (#15634)
- `MLFlowLogger` now logs hyperparameters and metrics in batched API calls (#15915)
- Overriding the `on_train_batch_{start,end}` hooks in conjunction with taking a `dataloader_iter` in the `training_step` no longer errors out and instead shows a warning (#16062)
- Moved `tensorboardX` to extra dependencies. Use the `CSVLogger` by default (#16349)
- Dropped PyTorch 1.9 support (#15347)
### Deprecated

- Deprecated `description`, `env_prefix` and `env_parse` parameters in `LightningCLI.__init__` in favour of giving them through `parser_kwargs` (#15651)
- Deprecated `pytorch_lightning.profiler` in favor of `pytorch_lightning.profilers` (#16059)
- Deprecated `Trainer(auto_select_gpus=...)` in favor of `pytorch_lightning.accelerators.find_usable_cuda_devices` (#16147)
- Deprecated `pytorch_lightning.tuner.auto_gpu_select.{pick_single_gpu,pick_multiple_gpus}` in favor of `pytorch_lightning.accelerators.find_usable_cuda_devices` (#16147)
- `nvidia/apex` deprecation (#16039)
  - Deprecated `pytorch_lightning.plugins.NativeMixedPrecisionPlugin` in favor of `pytorch_lightning.plugins.MixedPrecisionPlugin`
  - Deprecated the `LightningModule.optimizer_step(using_native_amp=...)` argument
  - Deprecated the `Trainer(amp_backend=...)` argument
  - Deprecated the `Trainer.amp_backend` property
  - Deprecated the `Trainer(amp_level=...)` argument
  - Deprecated the `pytorch_lightning.plugins.ApexMixedPrecisionPlugin` class
  - Deprecated the `pytorch_lightning.utilities.enums.AMPType` enum
  - Deprecated the `DeepSpeedPrecisionPlugin(amp_type=..., amp_level=...)` arguments
- `horovod` deprecation (#16141)
  - Deprecated `Trainer(strategy="horovod")`
  - Deprecated the `HorovodStrategy` class
- Deprecated `pytorch_lightning.lite.LightningLite` in favor of `lightning.fabric.Fabric` (#16314)
- `FairScale` deprecation (in favor of PyTorch's FSDP implementation) (#16353)
  - Deprecated the `pytorch_lightning.overrides.fairscale.LightningShardedDataParallel` class
  - Deprecated the `pytorch_lightning.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin` class
  - Deprecated the `pytorch_lightning.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin` class
  - Deprecated the `pytorch_lightning.strategies.fully_sharded.DDPFullyShardedStrategy` class
  - Deprecated the `pytorch_lightning.strategies.sharded.DDPShardedStrategy` class
  - Deprecated the `pytorch_lightning.strategies.sharded_spawn.DDPSpawnShardedStrategy` class
### Removed

- Removed deprecated `pytorch_lightning.utilities.memory.get_gpu_memory_map` in favor of `pytorch_lightning.accelerators.cuda.get_nvidia_gpu_stats` (#15617)
- Temporarily removed support for Hydra multi-run (#15737)
- Removed deprecated `pytorch_lightning.profiler.base.AbstractProfiler` in favor of `pytorch_lightning.profilers.profiler.Profiler` (#15637)
- Removed deprecated `pytorch_lightning.profiler.base.BaseProfiler` in favor of `pytorch_lightning.profilers.profiler.Profiler` (#15637)
- Removed deprecated code in `pytorch_lightning.utilities.meta` (#16038)
- Removed the deprecated `LightningDeepSpeedModule` (#16041)
- Removed the deprecated `pytorch_lightning.accelerators.GPUAccelerator` in favor of `pytorch_lightning.accelerators.CUDAAccelerator` (#16050)
- Removed the deprecated `pytorch_lightning.profiler.*` classes in favor of `pytorch_lightning.profilers` (#16059)
- Removed the deprecated `pytorch_lightning.utilities.cli` module in favor of `pytorch_lightning.cli` (#16116)
- Removed the deprecated `pytorch_lightning.loggers.base` module in favor of `pytorch_lightning.loggers.logger` (#16120)
- Removed the deprecated `pytorch_lightning.loops.base` module in favor of `pytorch_lightning.loops.loop` (#16142)
- Removed the deprecated `pytorch_lightning.core.lightning` module in favor of `pytorch_lightning.core.module` (#16318)
- Removed the deprecated `pytorch_lightning.callbacks.base` module in favor of `pytorch_lightning.callbacks.callback` (#16319)
- Removed the deprecated `Trainer.reset_train_val_dataloaders()` in favor of `Trainer.reset_{train,val}_dataloader` (#16131)
- Removed support for `LightningCLI(seed_everything_default=None)` (#16131)
- Removed support in LightningLite for FairScale's sharded training (`strategy='ddp_sharded'|'ddp_sharded_spawn'`). Use Fully Sharded Data Parallel instead (`strategy='fsdp'`) (#16329)
### Fixed

- Enhanced `reduce_boolean_decision` to accommodate `any`-analogous semantics expected by the `EarlyStopping` callback (#15253)
- Fixed the incorrect optimizer step synchronization when running across multiple TPU devices (#16020)
- Fixed a type error when dividing the chunk size in the ColossalAI strategy (#16212)
- Fixed a bug where the `interval` key of the scheduler would be ignored during manual optimization, making the `LearningRateMonitor` callback fail to log the learning rate (#16308)
- Fixed an issue with `MLFlowLogger` not finalizing correctly when status code 'finished' was passed (#16340)
## Contributors
@1SAA, @akihironitta, @AlessioQuercia, @awaelchli, @bipinKrishnan, @Borda, @carmocca, @dmitsf, @erhoo82, @ethanwharris, @Forbu, @hhsecond, @justusschock, @lantiga, @lightningforever, @Liyang90, @manangoel99, @mauvilsa, @nicolai86, @nohalon, @rohitgr7, @schmidt-jake, @speediedan, @yMayanand
If we forgot someone due to not matching commit email with GitHub account, let us know :]