No Spurious Local Minima in Deep Quadratic Networks

12/31/2019
by Abbas Kazemipour, et al.

Despite their practical success, a theoretical understanding of the loss landscape of neural networks has proven challenging due to the high-dimensional, non-convex, and highly nonlinear structure of such models. In this paper, we characterize the training landscape of the quadratic loss for neural networks with quadratic activation functions. We prove the existence of spurious local minima and saddle points that can be escaped easily with probability one when the number of neurons is greater than or equal to the input dimension and the norm of the training samples is used as a regressor. We prove that deep overparameterized neural networks with quadratic activations benefit from similarly benign landscape properties. Our theoretical results are independent of the data distribution and fill the existing gap in the theory of two-layer quadratic neural networks. Finally, we empirically demonstrate convergence to a global minimum for these problems.
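As a toy illustration of the empirical claim (this is a minimal sketch, not the authors' code), one can train a two-layer network with quadratic activations, f(x) = Σ_j (w_jᵀx)², by plain gradient descent on data generated by a planted teacher. When the number of neurons k is at least the input dimension d, the squared loss typically decreases to (near) zero, consistent with a benign landscape. All hyperparameters below are illustrative assumptions; the second layer is absorbed into the hidden weights, since with a quadratic activation the model depends only on WᵀW.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 5, 8, 200                       # input dim, neurons (k >= d), samples

X = rng.standard_normal((n, d))
W_star = rng.standard_normal((k, d))      # planted "teacher" weights
y = np.sum((X @ W_star.T) ** 2, axis=1)   # labels: sum_j (w*_j^T x_i)^2

W = 0.1 * rng.standard_normal((k, d))     # small random "student" init
lr = 1e-3                                 # step size tuned by hand for this toy
for step in range(20_000):
    Z = X @ W.T                           # (n, k) pre-activations
    pred = np.sum(Z ** 2, axis=1)         # quadratic activation, then sum
    resid = pred - y                      # (n,) residuals
    # gradient of the mean squared loss (1/n) sum_i resid_i^2 w.r.t. W:
    # (4/n) sum_i resid_i * (W x_i) x_i^T
    grad = (4.0 / n) * (Z * resid[:, None]).T @ X
    W -= lr * grad

print("final loss:", np.mean((np.sum((X @ W.T) ** 2, axis=1) - y) ** 2))
```

In runs of this sketch the loss is typically driven to numerical zero; the small random initialization matters, since W = 0 is a saddle point of this objective.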


