
train: allow passing custom learning rate optimizer #305

Open
breznak wants to merge 1 commit into dbolya:master from breznak:learning_rate_pct
Conversation


@breznak breznak commented Jan 26, 2020

  • allows passing a custom optimizer: train(optimizer=torch.optim.MyCustomOptimizer(..))
  • TODO: add early stopping
  • TODO: ideally avoid the extra steps for learning-rate management (leave it to the optimizer)

For #298

def train(optimizer=None):
    """
    @param optimizer: set a custom optimizer; the default (None) uses
    `torch.optim.SGD(net.parameters(), lr=args.lr, momentum=args.momentum, weight_decay=args.decay)`
    """

Ideally I'd set optimizer=SGD(...) as the default here already, but `net` is not available at that point.


# Warm up by linearly interpolating the learning rate from some smaller value
if cfg.lr_warmup_until > 0 and iteration <= cfg.lr_warmup_until:
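The warmup branch above ramps the learning rate linearly up to the base rate. A torch-free sketch of that interpolation, assuming a starting value like `cfg.lr_warmup_init` and a linear ramp (the parameter names here are hypothetical):

```python
def warmup_lr(iteration, base_lr=1e-3, warmup_init=1e-4, warmup_until=500):
    """Linearly interpolate from warmup_init to base_lr over warmup_until iterations."""
    if warmup_until > 0 and iteration <= warmup_until:
        return (base_lr - warmup_init) * (iteration / warmup_until) + warmup_init
    return base_lr

assert warmup_lr(0) == 1e-4                      # starts at the warmup value
assert abs(warmup_lr(500) - 1e-3) < 1e-12        # reaches base_lr at the end
assert warmup_lr(1000) == 1e-3                   # unchanged after warmup
```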

Can we leave this fine-tuning / learning-rate management to the optimizer (or other scheduler code provided by the framework)?
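One way to hand warmup off to the framework, as this comment suggests, would be PyTorch's built-in schedulers. A torch-free sketch of the same idea; with torch, the factor function below would be passed to torch.optim.lr_scheduler.LambdaLR (all constants here are illustrative):

```python
def warmup_factor(it, warmup_until=500, warmup_frac=0.1):
    """Multiplicative LR factor, LambdaLR-style: ramp from warmup_frac up to 1.0."""
    if it >= warmup_until:
        return 1.0
    return warmup_frac + (1.0 - warmup_frac) * (it / warmup_until)

# With torch, the equivalent would be roughly:
#   scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_factor)
#   ... optimizer.step(); scheduler.step()  # each iteration ...
assert warmup_factor(0) == 0.1     # initial fraction of the base LR
assert warmup_factor(500) == 1.0   # full LR once warmup ends
```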

@breznak breznak requested a review from dbolya January 26, 2020 20:37

breznak commented Jan 26, 2020

Please feel free to take this over; I'll be out for the next week.

