The Principles of Deep Learning Theory

06/18/2021
by Daniel A. Roberts, et al.

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
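
As a concrete illustration of the criticality tuning mentioned in the abstract, the minimal NumPy sketch below (an illustrative toy, not code from the book; the function name and parameter choices are assumptions) pushes random inputs through a deep ReLU network at initialization and records the mean squared preactivation in each layer. The weight and bias variances C_W and C_b are the initialization hyperparameters being tuned; for the ReLU universality class the critical point sits at C_W = 2, C_b = 0, where the signal neither explodes nor vanishes with depth.

```python
import numpy as np

def signal_norms(depth, width, c_w, c_b=0.0, n_samples=100, seed=0):
    """Propagate random inputs through a deep ReLU MLP at initialization,
    returning the mean squared preactivation after each layer."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n_samples, width))          # layer-1 preactivations
    norms = []
    for _ in range(depth):
        W = rng.normal(scale=np.sqrt(c_w / width), size=(width, width))
        b = rng.normal(scale=np.sqrt(c_b), size=width)
        z = np.maximum(z, 0.0) @ W + b               # ReLU, then affine layer
        norms.append(float(np.mean(z ** 2)))
    return norms

# Away from criticality the signal grows or decays exponentially with depth;
# at the ReLU critical initialization (C_W = 2, C_b = 0) it stays order one.
for c_w in (1.5, 2.0, 2.5):
    final = signal_norms(depth=50, width=500, c_w=c_w)[-1]
    print(f"C_W = {c_w}: mean squared preactivation after 50 layers ~ {final:.3e}")
```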

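The claim that finite-width corrections are controlled by the depth-to-width ratio can likewise be probed numerically. The toy experiment below (again only a sketch under assumed settings, not the book's code) samples a single output component of a critically initialized ReLU network over many random initializations and estimates its excess kurtosis, a simple measure of deviation from Gaussianity; at fixed depth the deviation shrinks as the width grows, consistent with the nearly-Gaussian picture.

```python
import numpy as np

def excess_kurtosis(samples):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    s = samples - samples.mean()
    return float(np.mean(s ** 4) / np.mean(s ** 2) ** 2 - 3.0)

def output_samples(depth, width, n_inits=3000, c_w=2.0, seed=0):
    """One scalar output of a randomly initialized deep ReLU MLP on a fixed
    unit-norm input, sampled over many independent initializations."""
    rng = np.random.default_rng(seed)
    x0 = np.ones(width) / np.sqrt(width)             # fixed unit-norm input
    outs = np.empty(n_inits)
    for i in range(n_inits):
        z = x0
        for _ in range(depth):
            W = rng.normal(scale=np.sqrt(c_w / width), size=(width, width))
            z = np.maximum(z, 0.0) @ W               # critical ReLU layer, zero bias
        outs[i] = z[0]                               # track one output component
    return outs

# Wider networks at fixed depth look more Gaussian (smaller excess kurtosis),
# consistent with deviations controlled by the aspect ratio depth/width.
for width in (10, 100):
    k = excess_kurtosis(output_samples(depth=10, width=width))
    print(f"depth/width = {10 / width:.2f}: excess kurtosis ~ {k:.2f}")
```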