Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks

06/14/2018
by Lechao Xiao, et al.

In recent years, state-of-the-art methods in computer vision have utilized increasingly deep convolutional neural networks (CNNs), with some of the most successful models employing hundreds or even thousands of layers. Pathologies such as vanishing and exploding gradients make training networks of such depth challenging. While residual connections and batch normalization do enable training at these depths, it has remained unclear whether such specialized architectural designs are truly necessary to train deep CNNs. In this work, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme. We derive this initialization scheme theoretically by developing a mean field theory for signal propagation and by characterizing the conditions for dynamical isometry, the equilibration of the singular values of the input-output Jacobian matrix. These conditions require that the convolution operator be an orthogonal transformation in the sense that it is norm-preserving. We present an algorithm for generating such random initial orthogonal convolution kernels and demonstrate empirically that they enable efficient training of extremely deep architectures.
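The algorithm the abstract refers to is, in the paper, a Delta-Orthogonal initialization: every spatial tap of the kernel is zero except the center, which carries a random orthogonal matrix, so the convolution acts on its input like an orthogonal matrix multiply. Below is a minimal sketch under that reading; NumPy, the function name delta_orthogonal, and the (k, k, c_in, c_out) kernel layout are illustrative assumptions, not the authors' reference code.

```python
# Minimal sketch of a Delta-Orthogonal conv initializer (hypothetical
# naming and layout conventions; not the paper's reference implementation).
import numpy as np

def delta_orthogonal(k, c_in, c_out, gain=1.0, rng=None):
    """Return a (k, k, c_in, c_out) conv kernel acting as an orthogonal map.

    The kernel is zero at every spatial tap except the center, which holds
    a random matrix H with orthonormal columns, so the convolution is
    norm-preserving. This requires c_out >= c_in.
    """
    if c_out < c_in:
        raise ValueError("need c_out >= c_in for a norm-preserving map")
    rng = np.random.default_rng() if rng is None else rng
    # QR decomposition of a Gaussian matrix gives an orthogonal matrix;
    # correcting by the signs of diag(R) makes it Haar-distributed.
    a = rng.standard_normal((c_out, c_out))
    q, r = np.linalg.qr(a)
    q *= np.sign(np.diag(r))
    w = np.zeros((k, k, c_in, c_out))
    w[k // 2, k // 2] = gain * q[:, :c_in].T  # H^T at the spatial center
    return w

# Example: a 3x3 kernel mapping 64 channels to 64 channels.
w = delta_orthogonal(3, 64, 64)
assert np.allclose(w[1, 1] @ w[1, 1].T, np.eye(64), atol=1e-6)
```

In the paper this kernel is used in tanh networks whose weight and bias variances are tuned to the critical point of the mean field equations; the gain argument above is a stand-in for that tuning.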

Related research

- Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks (06/14/2018)
  Recurrent neural networks have gained widespread use in modeling sequenc...

- Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs (01/25/2019)
  Training recurrent neural networks (RNNs) on long sequence tasks is plag...

- A Mean Field Theory of Batch Normalization (02/21/2019)
  We develop a mean field theory for batch normalization in fully-connecte...

- Random orthogonal additive filters: a solution to the vanishing/exploding gradient of deep neural networks (10/03/2022)
  Since the recognition in the early nineties of the vanishing/exploding (...

- Information Geometry of Orthogonal Initializations and Training (10/09/2018)
  Recently mean field theory has been successfully used to analyze propert...

- Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again (10/14/2022)
  Despite the enormous success of Graph Convolutional Networks (GCNs) in m...

- ExplainFix: Explainable Spatially Fixed Deep Networks (03/18/2023)
  Is there an initialization for deep networks that requires no learning? ...
