Commit cb460da

docs: add LR scheduler section with automatic optimization examples (#21559)
* docs: add LR scheduler section with automatic optimization examples
* docs: fix extra blank lines and cross-reference link
* update
* update

Co-authored-by: deependujha <deependujha21@gmail.com>
1 parent 9dc525c commit cb460da

File tree

1 file changed: +57 −0 lines changed

docs/source-pytorch/common/optimization.rst

Lines changed: 57 additions & 0 deletions
@@ -56,6 +56,63 @@ Should you still require the flexibility of calling ``.zero_grad()``, ``.backwar
always switch to :ref:`manual optimization <manual_optimization>`.
Manual optimization is required if you wish to work with multiple optimizers.

.. _lr_scheduling:

Learning Rate Scheduling
========================

Lightning supports learning rate schedulers configured via :meth:`~lightning.pytorch.core.LightningModule.configure_optimizers`.
In **automatic optimization**, Lightning calls ``scheduler.step()`` for you automatically;
you do not need to call it manually.
A simple example returning both an optimizer and a scheduler:

.. code-block:: python

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": scheduler,
                "interval": "epoch",  # "epoch" (default) or "step"
                "frequency": 1,  # how often to call scheduler.step(); default is 1
            },
        }
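For intuition (not part of the original docs): with ``StepLR(optimizer, step_size=1)`` and torch's default ``gamma=0.1``, each ``scheduler.step()`` multiplies the learning rate by 0.1, i.e. ``lr = base_lr * gamma ** (epoch // step_size)``. A dependency-free sketch of that decay rule, with a hypothetical helper name:

```python
def steplr_schedule(base_lr, step_size, gamma, num_epochs):
    """Simulate the StepLR decay rule: lr = base_lr * gamma ** (epoch // step_size).

    Pure-Python sketch for illustration only; the real
    torch.optim.lr_scheduler.StepLR mutates the optimizer's param
    groups in place rather than returning a list.
    """
    return [base_lr * gamma ** (epoch // step_size) for epoch in range(num_epochs)]


print(steplr_schedule(1e-3, step_size=1, gamma=0.1, num_epochs=3))
# roughly [0.001, 0.0001, 1e-05], up to float rounding
```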
The ``interval`` and ``frequency`` keys control when ``scheduler.step()`` is called:

.. list-table::
   :header-rows: 1
   :widths: 15 15 70

   * - ``interval``
     - ``frequency``
     - Behavior
   * - ``"epoch"`` (default)
     - 1 (default)
     - ``scheduler.step()`` is called once at the end of every epoch
   * - ``"epoch"``
     - N
     - ``scheduler.step()`` is called at the end of every N epochs
   * - ``"step"``
     - 1 (default)
     - ``scheduler.step()`` is called after every training batch (step)
   * - ``"step"``
     - N
     - ``scheduler.step()`` is called after every N training steps
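The semantics in the table above can be sketched as a small dependency-free helper (the function name ``scheduler_step_points`` is hypothetical, not a Lightning API) that computes the 1-based global batch indices at which ``scheduler.step()`` would fire:

```python
def scheduler_step_points(interval, frequency, num_epochs, steps_per_epoch):
    """Return the 1-based global batch indices at which scheduler.step() fires.

    Illustrative sketch of the interval/frequency semantics described in
    the table; not Lightning's actual implementation.
    """
    if interval == "step":
        # fire after every `frequency`-th training batch
        total_steps = num_epochs * steps_per_epoch
        return list(range(frequency, total_steps + 1, frequency))
    elif interval == "epoch":
        # fire at the end of every `frequency`-th epoch
        return [epoch * steps_per_epoch for epoch in range(frequency, num_epochs + 1, frequency)]
    raise ValueError(f"unknown interval: {interval!r}")


print(scheduler_step_points("epoch", 1, num_epochs=3, steps_per_epoch=4))
# fires at each epoch boundary: [4, 8, 12]
print(scheduler_step_points("step", 2, num_epochs=1, steps_per_epoch=6))
# fires after every 2nd batch: [2, 4, 6]
```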
.. note::
   If ``interval`` and ``frequency`` are not specified, Lightning defaults to
   ``interval="epoch"`` and ``frequency=1``, stepping the scheduler once per epoch.

.. note::
   If you are using **manual optimization**, Lightning will **not** call ``scheduler.step()``
   automatically. You are responsible for stepping the scheduler yourself inside
   ``training_step()`` or ``on_train_epoch_end()`` at the appropriate point.
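As a sketch of the manual-optimization case described in the note above (not part of this commit), a ``LightningModule`` can step the scheduler itself once per epoch, mimicking ``interval="epoch"``, ``frequency=1``. This assumes ``lightning`` and ``torch`` are installed and uses Lightning's documented ``automatic_optimization``, ``optimizers()``, ``lr_schedulers()``, and ``manual_backward()`` APIs:

```python
import lightning as L
import torch


class ManualLitModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        # Disable automatic optimization: Lightning steps neither the
        # optimizer nor the scheduler for us.
        self.automatic_optimization = False
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.manual_backward(loss)
        opt.step()

    def on_train_epoch_end(self):
        # With manual optimization we must call scheduler.step() ourselves.
        sch = self.lr_schedulers()
        sch.step()

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [scheduler]
```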
For the full list of supported return formats, see :meth:`~lightning.pytorch.core.LightningModule.configure_optimizers`.

.. _gradient_accumulation: