Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability

02/26/2021
by Jeremy M. Cohen, et al.

We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability. In this regime, the maximum eigenvalue of the training loss Hessian hovers just above the numerical value 2 / (step size), and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales. Since this behavior is inconsistent with several widespread presumptions in the field of optimization, our findings raise questions as to whether these presumptions are relevant to neural network training. We hope that our findings will inspire future efforts aimed at rigorously understanding optimization at the Edge of Stability. Code is available at https://github.com/locuslab/edge-of-stability.
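To make the central quantity concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for the official code) of how one might track the "sharpness" of the full-batch training loss, i.e. the largest eigenvalue of its Hessian, via power iteration on Hessian-vector products, and compare it to the stability threshold 2 / (step size). Names such as model, criterion, X_full, y_full, and eta are hypothetical placeholders.

```python
import torch

def sharpness(loss, params, iters=20):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    by power iteration on Hessian-vector products."""
    # Gradients with create_graph=True so we can differentiate through them.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v /= v.norm()
    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = torch.dot(v, hv).item()  # Rayleigh quotient estimate (||v|| = 1)
        v = hv / (hv.norm() + 1e-12)
    return eig

# Hypothetical usage during full-batch gradient descent with step size eta:
#   loss = criterion(model(X_full), y_full)
#   lam_max = sharpness(loss, [p for p in model.parameters() if p.requires_grad])
#   print(lam_max, "vs. threshold", 2.0 / eta)
# At the Edge of Stability, the paper reports that lam_max rises during training
# and then hovers just above 2 / eta.
```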

Related research

07/29/2022 · Adaptive Gradient Methods at the Edge of Stability
Very little is known about the training dynamics of adaptive gradient me...

10/07/2022 · Understanding Edge-of-Stability Training Dynamics with a Minimalist Example
Recently, researchers observed that gradient descent for deep neural net...

09/30/2022 · Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability
Traditional analyses of gradient descent show that when the largest eige...

10/10/2022 · Second-order regression models exhibit progressive sharpening to the edge of stability
Recent studies of gradient descent with large step sizes have shown that...

07/26/2022 · Analyzing Sharpness along GD Trajectory: Progressive Sharpening and Edge of Stability
Recent findings (e.g., arXiv:2103.00065) demonstrate that modern neural ...

07/09/2023 · Investigating the Edge of Stability Phenomenon in Reinforcement Learning
Recent progress has been made in understanding optimisation dynamics in ...

05/22/2023 · Gradient Descent Monotonically Decreases the Sharpness of Gradient Flow Solutions in Scalar Networks and Beyond
Recent research shows that when Gradient Descent (GD) is applied to neur...
