Global Convergence Analysis of Deep Linear Networks with A One-neuron Layer

by   Kun Chen, et al.

In this paper, we build on Eftekhari's work to give a non-local convergence analysis of deep linear networks. Specifically, we consider optimizing deep linear networks that contain a layer with a single neuron under the quadratic loss. We characterize the convergent point of the gradient-flow trajectory from an arbitrary starting point, including trajectories that converge to a saddle point or to the origin. We also establish stage-wise convergence rates for trajectories that converge to the global minimizer. To achieve these results, we extend the machinery of Eftekhari's work to provably identify the rank-stable set and the set of initializations that converge to the global minimizer, and we give concrete examples demonstrating the necessity of our definitions. To the best of our knowledge, our results are the first to give a non-local global analysis of linear neural networks from arbitrarily initialized points, rather than the lazy training regime that has dominated the neural-network literature, or the restricted benign initializations of Eftekhari's work. Extending our results to general linear networks without the one-neuron-layer assumption remains a challenging open problem.
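To make the setting concrete, the sketch below (a discrete gradient-descent surrogate for the gradient flow studied in the paper; the widths, data, and step size are illustrative assumptions, not taken from the paper) trains a three-layer linear network whose middle layer has a single neuron under the quadratic loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Deep linear network W3 @ W2 @ W1 with a one-neuron middle layer:
# W2 has a single row, so the end-to-end map W3 W2 W1 has rank at most 1.
W1 = rng.normal(scale=0.5, size=(4, 3))   # input layer:  3 -> 4
W2 = rng.normal(scale=0.5, size=(1, 4))   # bottleneck:   4 -> 1
W3 = rng.normal(scale=0.5, size=(2, 1))   # output layer: 1 -> 2

# Synthetic regression data for the quadratic loss 0.5 * ||W3 W2 W1 X - Y||_F^2.
X = rng.normal(size=(3, 50))
Y = rng.normal(size=(2, 3)) @ X           # a planted linear teacher

def loss(W1, W2, W3):
    R = W3 @ W2 @ W1 @ X - Y
    return 0.5 * np.sum(R ** 2)

init_loss = loss(W1, W2, W3)

# Gradient descent with a small step size as a discrete stand-in for gradient flow.
lr = 1e-3
for _ in range(5000):
    R = W3 @ W2 @ W1 @ X - Y              # residual
    G3 = R @ (W2 @ W1 @ X).T              # dL/dW3
    G2 = W3.T @ R @ (W1 @ X).T            # dL/dW2
    G1 = (W3 @ W2).T @ R @ X.T            # dL/dW1
    W1 -= lr * G1
    W2 -= lr * G2
    W3 -= lr * G3

final_loss = loss(W1, W2, W3)
```

Because of the rank-1 bottleneck, the loss generally cannot reach zero against a full-rank teacher; the global minimizer corresponds to the best rank-1 approximation of the target map, which is why the set of initializations reaching it, versus a saddle point or the origin, is the central object of the analysis.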


Deep linear neural networks with arbitrary loss: All local minima are global

We consider deep linear networks with arbitrary differentiable loss. We ...

Global convergence of neuron birth-death dynamics

Neural networks with a large number of parameters admit a mean-field des...

A Convergence Analysis of Nesterov's Accelerated Gradient Method in Training Deep Linear Neural Networks

Momentum methods, including heavy-ball (HB) and Nesterov's accelerated g...

Scaling Limit of Neural Networks with the Xavier Initialization and Convergence to a Global Minimum

We analyze single-layer neural networks with the Xavier initialization i...

Directional Convergence Analysis under Spherically Symmetric Distribution

We consider the fundamental problem of learning linear predictors (i.e.,...

A Note on the Global Convergence of Multilayer Neural Networks in the Mean Field Regime

In a recent work, we introduced a rigorous framework to describe the mea...

When Expressivity Meets Trainability: Fewer than n Neurons Can Work

Modern neural networks are often quite wide, causing large memory and co...