Practical Quasi-Newton Methods for Training Deep Neural Networks

06/16/2020
by Donald Goldfarb, et al.

We consider the development of practical stochastic quasi-Newton methods, in particular Kronecker-factored block-diagonal BFGS and L-BFGS methods, for training deep neural networks (DNNs). In DNN training, the number of variables and components of the gradient n is often of the order of tens of millions and the Hessian has n^2 elements. Consequently, computing and storing a full n × n BFGS approximation, or even storing a modest number of (step, change in gradient) vector pairs for use in an L-BFGS implementation, is out of the question. In our proposed methods, we approximate the Hessian by a block-diagonal matrix and use the structure of the gradient and Hessian to further approximate these blocks, each of which corresponds to a layer, as the Kronecker product of two much smaller matrices. This is analogous to the approach in KFAC, which computes a Kronecker-factored block-diagonal approximation to the Fisher matrix in a stochastic natural gradient method. Because of the indefinite and highly variable nature of the Hessian in a DNN, we also propose a new damping approach to keep the BFGS and L-BFGS approximations bounded both above and below. In tests on autoencoder feed-forward neural network models with either nine or thirteen layers applied to three datasets, our methods outperformed or performed comparably to KFAC and state-of-the-art first-order stochastic methods.
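To make the two ingredients described above concrete, the NumPy sketch below illustrates (i) a Powell-damped BFGS update, a standard safeguard of the same flavor as the damping discussed in the abstract (the paper's own damping scheme differs in detail), and (ii) how a Kronecker-factored curvature block A ⊗ B can precondition one layer's gradient without ever forming the full per-layer block. The function names, the damping constant mu, and the toy dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def powell_damped_bfgs_update(B, s, y, mu=0.2):
    """One BFGS update of a Hessian approximation B with Powell-style damping.

    The damping replaces y with r = theta*y + (1 - theta)*B@s, chosen so that
    s^T r >= mu * s^T B s.  This keeps the updated matrix positive definite
    even when the sampled curvature s^T y is small or negative.  (Illustrative
    standard damping, not the paper's specific scheme.)
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= mu * sBs else (1.0 - mu) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)


def kron_preconditioned_gradient(A, B, G):
    """Apply the inverse of a Kronecker-factored curvature block A kron B
    to the gradient matrix G of one layer's weights.

    Uses (A kron B)^{-1} vec(G) = vec(B^{-1} G A^{-T}) (column-major vec),
    so only the two small factors are solved with, never the full block.
    """
    X = np.linalg.solve(B, G)           # B^{-1} G
    return np.linalg.solve(A, X.T).T    # (A^{-1} X^T)^T = B^{-1} G A^{-T}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n = 4, 3
    # Small symmetric positive definite Kronecker factors and a toy gradient.
    A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
    B = rng.standard_normal((m, m)); B = B @ B.T + m * np.eye(m)
    G = rng.standard_normal((m, n))
    step = kron_preconditioned_gradient(A, B, G)
    # Check against the explicit (mn x mn) Kronecker inverse applied to vec(G).
    full = np.linalg.solve(np.kron(A, B), G.flatten(order="F"))
    assert np.allclose(step.flatten(order="F"), full)
    print("Kronecker-factored solve matches the full block solve.")
```

The check at the end verifies the identity that makes the factored approach cheap: preconditioning with the two small factors gives exactly the same step as inverting the full Kronecker-product block.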
