A Geometric Approach of Gradient Descent Algorithms in Neural Networks

11/08/2018
by Yacine Chitour, et al.

In this article we present a geometric framework for analyzing the convergence of gradient descent trajectories in the context of neural networks. For linear networks with an arbitrary number of hidden layers, we identify quantities that are conserved along the gradient descent system (GDS). We use them to prove that every trajectory of the GDS is bounded, which implies convergence to a critical point. We then focus on the local behavior in the neighborhood of each critical point and study the associated basins of attraction, so as to measure the "possibility" of converging to saddle points and local minima.
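The abstract does not spell out the conserved quantities, but a well-known example of this kind for a two-layer linear network f(x) = W2 W1 x trained by gradient flow on the squared loss is the balancedness matrix C = W2ᵀW2 − W1W1ᵀ, whose time derivative vanishes along the flow. The sketch below (not the authors' code; all names and dimensions are illustrative) approximates gradient flow with small-step gradient descent in NumPy and checks that C drifts only slightly:

```python
import numpy as np

# Hedged sketch: for a two-layer linear network trained by gradient flow
# on L = 0.5 * ||W2 @ W1 @ X - Y||_F^2, the matrix
#     C = W2.T @ W2 - W1 @ W1.T
# is conserved (dC/dt = 0). Small-step gradient descent approximates the
# flow, so C should drift only by O(step_size) effects.

rng = np.random.default_rng(0)
n, h, m, N = 3, 4, 2, 20            # input dim, hidden dim, output dim, samples
X = rng.normal(size=(n, N))
Y = rng.normal(size=(m, N))
W1 = 0.1 * rng.normal(size=(h, n))
W2 = 0.1 * rng.normal(size=(m, h))

def conserved(W1, W2):
    return W2.T @ W2 - W1 @ W1.T

C0 = conserved(W1, W2)
lr = 1e-3
for _ in range(5000):
    R = W2 @ W1 @ X - Y             # residual
    g1 = W2.T @ R @ X.T             # dL/dW1
    g2 = R @ X.T @ W1.T             # dL/dW2
    W1 -= lr * g1
    W2 -= lr * g2

drift = np.linalg.norm(conserved(W1, W2) - C0)
print(f"drift of conserved quantity after 5000 steps: {drift:.2e}")
```

In exact gradient flow the two first-order terms in dC/dt cancel, so the residual drift seen here comes only from the discretization and shrinks with the step size.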
