3 files changed: +20 -0 lines changed

@@ -324,6 +324,7 @@ tensorboard --logdir /some/path
 
 - [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
 - [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
+- [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
 - [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
 - [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
 - [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
@@ -19,6 +19,24 @@ It can be useful to force training for a minimum number of epochs or limit to a
 trainer = Trainer(min_nb_epochs=1, max_nb_epochs=1000)
 ```
 
+---
+#### Early stopping
+To enable early stopping, define the callback and pass it to the trainer.
+```{.python}
+from pytorch_lightning.callbacks import EarlyStopping
+
+# DEFAULTS
+early_stop_callback = EarlyStopping(
+    monitor='val_loss',
+    min_delta=0.00,
+    patience=0,
+    verbose=False,
+    mode='auto'
+)
+
+trainer = Trainer(early_stop_callback=early_stop_callback)
+```
+
 ---
 #### Force disable early stop
 Use this to turn off early stopping and run training to the [max_epoch](#force-training-for-min-or-max-epochs)
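For the early stopping example added above, the `monitor='val_loss'` key is expected to show up in the metrics the model reports during validation. A minimal sketch of where that key could come from, assuming this version's `validation_step`/`validation_end` hooks and the `import pytorch_lightning as ptl` alias (the model class and its name are hypothetical, for illustration only):

```{.python}
import torch
import torch.nn.functional as F
import pytorch_lightning as ptl  # assumed import alias for this version of the library


class CoolSystem(ptl.LightningModule):  # hypothetical module name
    # training_step, configure_optimizers, dataloaders, etc. omitted

    def validation_step(self, batch, batch_nb):
        x, y = batch
        y_hat = self.forward(x)
        # report a per-batch validation loss
        return {'val_loss': F.cross_entropy(y_hat, y)}

    def validation_end(self, outputs):
        # the averaged 'val_loss' returned here is the value that
        # EarlyStopping(monitor='val_loss') checks after each validation run
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        return {'val_loss': avg_loss}
```

Under the Keras-style semantics these defaults follow, `patience=0` stops training the first time the monitored value fails to improve by at least `min_delta`, and `mode='auto'` infers the improvement direction from the metric name.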
@@ -120,6 +120,7 @@ Notice a few things about this flow:
 
 - [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
 - [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
+- [Early stopping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#early-stopping)
 - [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
 - [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
 - [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)