How to decay your learning rate

03/23/2021
by Aitor Lewkowycz, et al.

Complex learning rate schedules have become an integral part of deep learning. We find empirically that common fine-tuned schedules decay the learning rate after the weight norm bounces. This leads to the proposal of ABEL: an automatic scheduler which decays the learning rate by keeping track of the weight norm. ABEL's performance matches that of tuned schedules and is more robust with respect to its parameters. Through extensive experiments in vision, NLP, and RL, we show that if the weight norm does not bounce, we can simplify schedules even further with no loss in performance. In such cases, a complex schedule has similar performance to a constant learning rate with a decay at the end of training.
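
To make the idea concrete, below is a minimal sketch of a weight-norm-triggered decay rule in the spirit of ABEL. The class name `ABELLikeScheduler`, its hyperparameters, and the bounce-detection logic are illustrative assumptions, not the paper's exact algorithm: it simply decays the learning rate when the weight norm starts growing again after a decreasing phase (a "bounce"), plus one final decay near the end of training.

```python
import numpy as np


class ABELLikeScheduler:
    """Hypothetical sketch of a weight-norm-based decay rule (not the paper's exact algorithm).

    Decays the learning rate by `decay_factor` whenever the total weight norm
    "bounces" (starts increasing after having decreased), and applies a final
    decay near the end of training.
    """

    def __init__(self, base_lr, decay_factor=0.1, final_decay_fraction=0.9,
                 total_epochs=100):
        self.lr = base_lr
        self.decay_factor = decay_factor
        self.final_decay_epoch = int(final_decay_fraction * total_epochs)
        self.prev_norm = None
        self.norm_was_decreasing = False
        self.final_decay_applied = False

    def weight_norm(self, params):
        # Global L2 norm over all trainable parameter arrays.
        return float(np.sqrt(sum(np.sum(p ** 2) for p in params)))

    def step(self, epoch, params):
        norm = self.weight_norm(params)
        if self.prev_norm is not None:
            if norm < self.prev_norm:
                self.norm_was_decreasing = True
            elif self.norm_was_decreasing and norm > self.prev_norm:
                # The weight norm bounced: it grew after a decreasing phase.
                self.lr *= self.decay_factor
                self.norm_was_decreasing = False
        self.prev_norm = norm

        # Final decay near the end of training, mirroring the simple
        # "constant learning rate with a decay at the end" baseline.
        if epoch >= self.final_decay_epoch and not self.final_decay_applied:
            self.lr *= self.decay_factor
            self.final_decay_applied = True
        return self.lr
```

In use, `step` would be called once per epoch with the model's parameter arrays, and the returned learning rate fed to the optimizer. The key design choice, per the abstract, is that decay events are driven by the weight-norm dynamics rather than by a hand-tuned epoch schedule.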

