Gradient Normalization & Depth Based Decay For Deep Learning

12/10/2017
by Robert Kwiatkowski, et al.

In this paper we introduce a novel method of gradient normalization and decay with respect to depth. Our method normalizes all gradients in a deep neural network and then decays each gradient with respect to its depth in the network. The proposed normalization and decay techniques can be used in conjunction with most current state-of-the-art optimizers and are a simple addition to any network. Despite its simplicity, this method improved convergence time on state-of-the-art networks such as DenseNet and ResNet on image classification tasks, as well as on an LSTM for natural language processing tasks.
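The abstract does not specify the norm used, the decay schedule, or the direction in which depth is counted, so the sketch below is one plausible reading, not the paper's exact method: per-parameter L2 normalization followed by exponential decay by depth counted from the output, applied between backward() and the optimizer step. The function name normalize_and_decay_gradients and the decay parameter are illustrative.

```python
import torch
import torch.nn as nn

def normalize_and_decay_gradients(model: nn.Module, decay: float = 0.9) -> None:
    """Normalize each parameter's gradient to unit L2 norm, then scale it by
    decay**depth. Both the choice of L2 norm and the exponential schedule are
    assumptions; the abstract only states "normalize, then decay with depth"."""
    params = [p for p in model.parameters() if p.grad is not None]
    # Parameter order is used as a proxy for depth: the last parameters in
    # the list sit nearest the output, so they receive depth 0 here.
    for depth_from_output, p in enumerate(reversed(params)):
        grad_norm = p.grad.norm()
        if grad_norm > 0:
            p.grad.div_(grad_norm)               # unit-norm the gradient
        p.grad.mul_(decay ** depth_from_output)  # shrink with depth

# Minimal usage sketch: call between backward() and optimizer.step().
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
normalize_and_decay_gradients(model, decay=0.9)
optimizer.step()
```

Because the per-gradient magnitudes are discarded by the normalization, the decay factor alone sets the relative update scale across layers; since the hook runs before the optimizer step, it composes with SGD, Adam, or any other gradient-based optimizer, which matches the paper's claim of compatibility with most current optimizers.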
