An Information-Theoretic View for Deep Learning

04/24/2018
by   Jingwei Zhang, et al.

Deep learning has transformed computer vision, natural language processing, and speech recognition. However, two critical questions remain obscure: (1) why do deep neural networks generalize better than shallow networks? (2) Does it always hold that a deeper network leads to better performance? Specifically, letting L be the number of convolutional and pooling layers in a deep neural network and n be the size of the training sample, we derive an upper bound on the expected generalization error for this network, i.e., E[R(W) - R_S(W)] ≤ exp(-(L/2) log(1/η)) √((2σ²/n) I(S, W)), where σ > 0 is a constant depending on the loss function, 0 < η < 1 is a constant depending on the information loss of each convolutional or pooling layer, and I(S, W) is the mutual information between the training sample S and the output hypothesis W. This upper bound shows the following. (1) As the network adds more convolutional and pooling layers L, the expected generalization error decreases exponentially to zero; layers with strict information loss, such as convolutional layers, reduce the generalization error of deep learning algorithms. This answers the first question. However, (2) a zero expected generalization error does not imply a small test error E[R(W)], because E[R_S(W)] can be large when the information needed for fitting the data is lost as the number of layers increases. This suggests that the claim "the deeper the better" is conditioned on a small training error E[R_S(W)].
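To make the exponential dependence on L concrete, here is a minimal sketch that evaluates the right-hand side of the bound as the number of convolutional and pooling layers grows. The constants η, σ, n, and I(S, W) below are illustrative placeholders, not values from the paper.

```python
import math

def generalization_bound(L, eta, sigma, n, mutual_info):
    """Right-hand side of the bound in the abstract:
    exp(-(L/2) * log(1/eta)) * sqrt((2 * sigma**2 / n) * I(S, W))."""
    contraction = math.exp(-(L / 2) * math.log(1 / eta))  # equals eta**(L/2)
    return contraction * math.sqrt((2 * sigma ** 2 / n) * mutual_info)

# Illustrative values (assumptions, not from the paper):
# eta = 0.9, sigma = 1.0, n = 10_000 samples, I(S, W) = 5.0 nats.
for L in (1, 5, 10, 20, 50):
    bound = generalization_bound(L, eta=0.9, sigma=1.0, n=10_000, mutual_info=5.0)
    print(f"L = {L:2d}  bound = {bound:.6f}")
```

As expected, the bound shrinks geometrically with L (by a factor of η per two layers here), while the √(2σ²/n · I(S, W)) term plays the role of the usual sample-size-dependent factor.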


