Fix LR not being correctly set after using LearningRateFinder callback (#21068)
* fix(tuner/lr_finder): apply LR suggestion after checkpoint restore when used as callback
Previously, LearningRateFinder applied the suggested LR before restoring the
checkpoint, so the optimizer LR was reverted by the restore step. This caused
the callback to print “Learning rate set to …” without persisting the change.
Change:
- Move the LR application to after the checkpoint restore, and update both the
  LightningModule attribute and the active optimizer param groups so the LR
  persists for training.
Tests:
- Add unit test `test_lr_finder_callback_applies_lr_after_restore` to assert that
  the optimizer LR matches the LR Finder suggestion after the search completes.
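The ordering fix described above can be sketched in a framework-free form. The names below (`FakeOptimizer`, `apply_suggestion_after_restore`) are illustrative stand-ins, not Lightning's actual internals; the real code path runs inside the `LearningRateFinder` callback and restores a checkpoint taken before the LR search:

```python
# Hedged sketch of the fix's ordering, using stand-in objects
# (not Lightning's real API): the LR suggestion must be applied
# AFTER the checkpoint restore, or the restore reverts it.

class FakeOptimizer:
    def __init__(self, lr):
        # mimics torch.optim.Optimizer.param_groups
        self.param_groups = [{"lr": lr}]

def apply_suggestion_after_restore(optimizer, checkpoint_lr, suggested_lr):
    # restore step: the checkpoint LR overwrites whatever the search set
    for group in optimizer.param_groups:
        group["lr"] = checkpoint_lr
    # the fix: apply the suggestion only after the restore has run,
    # so it persists into training
    for group in optimizer.param_groups:
        group["lr"] = suggested_lr

opt = FakeOptimizer(lr=0.1)
apply_suggestion_after_restore(opt, checkpoint_lr=0.1, suggested_lr=3e-4)
assert opt.param_groups[0]["lr"] == 3e-4  # suggestion survives the restore
```

Before the fix, the two steps inside `apply_suggestion_after_restore` ran in the opposite order, so the final param-group LR equaled `checkpoint_lr` even though the callback had already logged the suggested value.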
* changelog
* Apply suggestions from code review
---------
Co-authored-by: Nicki Skafte Detlefsen <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
src/lightning/pytorch/CHANGELOG.md (2 additions, 0 deletions)
@@ -28,6 +28,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
+
+- Fixed learning rate not being correctly set after using `LearningRateFinder` callback ([#21068](https://github.com/Lightning-AI/pytorch-lightning/pull/21068))