Adding One Neuron Can Eliminate All Bad Local Minima

05/22/2018
by Shiyu Liang, et al.

One of the main difficulties in analyzing neural networks is the non-convexity of the loss function, which may have many bad local minima. In this paper, we study the loss landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
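
To make the construction concrete, here is a minimal PyTorch sketch of the "one extra neuron with a skip connection to the output" idea. The specific choice of an exponential unit a * exp(w^T x + b) and the quadratic penalty on its output weight a are stated here as assumptions for illustration, and the names AugmentedClassifier, base_net, and reg are hypothetical; this is not presented as the paper's exact formulation.

```python
import torch
import torch.nn as nn

class AugmentedClassifier(nn.Module):
    """Wrap a binary classifier f(x) and add one extra neuron whose
    output is skip-connected directly to the network output.

    Assumption for illustration: the extra unit is an exponential
    neuron a * exp(w^T x + b), and a quadratic penalty on `a` is added
    to the training loss so the unit can vanish at a global minimum.
    """

    def __init__(self, base_net: nn.Module, input_dim: int, reg: float = 1e-3):
        super().__init__()
        self.base_net = base_net                        # original network, one logit per example
        self.w = nn.Parameter(torch.zeros(input_dim))   # input weights of the extra neuron
        self.b = nn.Parameter(torch.zeros(1))           # bias of the extra neuron
        self.a = nn.Parameter(torch.zeros(1))           # output (skip-connection) weight
        self.reg = reg

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Augmented output: f(x) + a * exp(w^T x + b)
        skip = self.a * torch.exp(x @ self.w + self.b)
        return self.base_net(x).squeeze(-1) + skip

    def penalty(self) -> torch.Tensor:
        # Regularizer on the extra neuron's output weight.
        return self.reg * self.a.pow(2).squeeze()

# Example training step with a logistic loss on labels y in {-1, +1}.
model = AugmentedClassifier(base_net=nn.Linear(10, 1), input_dim=10)
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,)).float() * 2 - 1
loss = torch.nn.functional.softplus(-y * model(x)).mean() + model.penalty()
loss.backward()
```

The intuition behind the design, as the abstract suggests, is that the augmented loss (original loss plus the penalty on the extra neuron's output weight) has no bad local minima; at such a minimum the auxiliary unit is driven to be inactive, so the remaining network attains the best achievable value of the original objective.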

Related research

01/02/2019 · Elimination of All Bad Local Minima in Deep Learning
In this paper, we theoretically prove that we can eliminate all suboptim...

06/10/2020 · Is the Skip Connection Provable to Reform the Neural Network Loss Landscape?
The residual network is now one of the most effective structures in deep...

01/12/2019 · Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit
Recent work has noted that all bad local minima can be removed from neur...

09/16/2020 · Landscape of Sparse Linear Network: A Brief Investigation
Network pruning, or sparse network has a long history and practical sign...

05/31/2023 · Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape
We study the loss landscape of two-layer mildly overparameterized ReLU n...

06/01/2020 · The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks
We study the effects of mild over-parameterization on the optimization l...

12/31/2019 · Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
Traditional landscape analysis of deep neural networks aims to show that...