Free Probability, Newton lilypads and Jacobians of neural networks

11/01/2021
by Reda Chhaibi, et al.

Gradient descent during the training of a neural network can suffer from many instabilities. The spectral density of the Jacobian is a key component for analyzing robustness. Following the work of Pennington et al., such Jacobians are modeled using free multiplicative convolutions from Free Probability Theory. We present a reliable and very fast method for computing the associated spectral densities, with controlled and provable convergence. Our technique is based on an adaptive Newton-Raphson scheme that finds and chains basins of attraction: the Newton algorithm finds contiguous lilypad-like basins and hops from one to the next, heading towards the objective. We demonstrate the applicability of our method by using it to assess how the learning process is affected by network depth, layer widths and initialization choices: empirically, final test losses are strongly correlated with our Free Probability metrics.
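To make the "lilypad" idea concrete, here is a minimal Python sketch of Newton-Raphson continuation for a Stieltjes transform. It is not the paper's method: as a toy stand-in for the subordination equations of free multiplicative convolution, we solve the semicircle fixed-point equation g = 1/(z - g), and the function names (newton_solve, density_via_lilypads) and step schedule are illustrative assumptions. The chaining pattern is the point: solve high in the upper half-plane where Newton converges from the asymptotic guess g ~ 1/z, then step the imaginary part down toward the real axis, reusing each solution as the initial guess inside the next basin, and recover the density by Stieltjes inversion.

```python
import numpy as np

def newton_solve(z, g0, tol=1e-12, max_iter=100):
    """Newton-Raphson for f(g) = g*(z - g) - 1 = 0, i.e. the semicircle
    fixed-point equation g = 1/(z - g) (a toy stand-in for the
    subordination equations of free multiplicative convolution)."""
    g = g0
    for _ in range(max_iter):
        f = g * (z - g) - 1.0
        df = z - 2.0 * g
        step = f / df
        g -= step
        if abs(step) < tol:
            break
    return g

def density_via_lilypads(x, eps=1e-6, n_steps=30):
    """Chain basins of attraction: start high in the upper half-plane,
    where g ~ 1/z is a good initial guess, then step Im(z) down toward
    eps, hopping from each converged solution to the next basin."""
    heights = np.geomspace(10.0, eps, n_steps)
    g = 1.0 / complex(x, heights[0])        # asymptotic initial guess
    for h in heights:
        g = newton_solve(complex(x, h), g)  # hop to the next lilypad
    # Stieltjes inversion: density(x) = -Im g(x + i*eps) / pi
    return -g.imag / np.pi

for x in np.linspace(-2.5, 2.5, 11):
    print(f"x = {x:+.2f}   density ~ {density_via_lilypads(x):.4f}")
```

For the semicircle law this reproduces sqrt(4 - x^2)/(2*pi) on [-2, 2] (about 0.3183 at x = 0) and 0 outside the support; the continuation keeps Newton on the Herglotz branch even near the spectral edges, where a naive cold start can be attracted to the wrong root.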


Related research

07/09/2021 · Newton's Method with GeoGebra
In this work, we present a program in the computational environment, Geo...

06/14/2022 · SpecNet2: Orthogonalization-free spectral embedding by neural networks
Spectral methods which represent data points by eigenvectors of kernel m...

05/18/2023 · Modified Gauss-Newton Algorithms under Noise
Gauss-Newton methods and their stochastic version have been widely used ...

06/19/2012 · An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence
The present work deals with an improved back-propagation algorithm based...

01/30/2023 · Robust empirical risk minimization via Newton's method
We study a variant of Newton's method for empirical risk minimization, w...

05/02/2023 · Computing Free Convolutions via Contour Integrals
This work proposes algorithms for computing additive and multiplicative ...

12/29/2022 · A fast and convergent combined Newton and gradient descent method for computing steady states of chemical reaction networks
In this work we present a fast, globally convergent, iterative algorithm...
