Fast Convergence for Stochastic and Distributed Gradient Descent in the Interpolation Limit

03/08/2018
by Partha P. Mitra, et al.

Modern supervised learning techniques, particularly those using so-called deep nets, involve fitting high-dimensional labelled data sets with functions containing very large numbers of parameters. Much of this work is empirical, and interesting phenomena have been observed that call for theoretical explanation; however, the non-convexity of the loss functions complicates the analysis. It has recently been proposed that some of the success of these techniques lies in the effectiveness of the simple stochastic gradient descent (SGD) algorithm in the so-called interpolation limit, in which all labels are fit perfectly. This analysis is made possible because the SGD iteration reduces to a stochastic linear system near the interpolating minimum of the loss function. Here we exploit this insight by analyzing a distributed algorithm for gradient descent, also in the interpolating limit. The algorithm corresponds to gradient descent applied to a simple penalized distributed loss function, L(w_1, ..., w_n) = Σ_i l_i(w_i) + μ Σ_{<i,j>} |w_i - w_j|^2, where each node i is allowed its own parameter vector w_i and <i,j> ranges over the edges of a connected graph defining the communication links between nodes. It is shown that this distributed algorithm converges linearly (i.e., the error decreases exponentially with iteration number), with a rate 1 - (η/n) λ_min(H) < R < 1, where λ_min(H) is the smallest nonzero eigenvalue of the sample covariance (equivalently, of the Hessian H). In contrast with previous uses of similar penalty functions to enforce consensus between nodes, in the interpolating limit the penalty parameter need not be taken to infinity for consensus to occur. The analysis further reinforces the utility of this limit in the theoretical treatment of modern machine learning algorithms.
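To make the penalized distributed loss concrete, the following minimal NumPy sketch (not the paper's code; the problem sizes, ring communication graph, penalty weight μ and step size η are illustrative assumptions) runs synchronous gradient descent on L(w_1, ..., w_n) = Σ_i l_i(w_i) + μ Σ_{<i,j>} |w_i - w_j|^2 with squared-error local losses on an overparameterized linear model, so that an interpolating minimum with zero loss exists.

    import numpy as np

    # Sketch of gradient descent on the penalized distributed loss
    #   L(w_1, ..., w_n) = sum_i l_i(w_i) + mu * sum_{<i,j>} |w_i - w_j|^2
    # with local losses l_i(w) = |X_i w - y_i|^2 on an overparameterized linear model.
    # Problem sizes, the graph, mu, and eta are illustrative choices, not values from the paper.
    rng = np.random.default_rng(0)
    n_nodes, d, m = 4, 50, 5                     # 4 nodes, 50 parameters, 5 samples per node (n*m < d)
    w_star = rng.normal(size=d)                  # weights generating noiseless labels
    X = [rng.normal(size=(m, d)) for _ in range(n_nodes)]
    y = [Xi @ w_star for Xi in X]                # exact labels: the interpolation limit

    # Ring graph defining the communication links <i,j>
    edges = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]
    neighbors = {i: [b for a, b in edges if a == i] + [a for a, b in edges if b == i]
                 for i in range(n_nodes)}

    mu, eta, n_iters = 1.0, 5e-3, 20000
    W = [np.zeros(d) for _ in range(n_nodes)]    # one parameter vector per node

    for _ in range(n_iters):
        grads = []
        for i in range(n_nodes):
            g_local = 2 * X[i].T @ (X[i] @ W[i] - y[i])              # gradient of l_i(w_i)
            g_pen = 2 * mu * sum(W[i] - W[j] for j in neighbors[i])  # gradient of the consensus penalty
            grads.append(g_local + g_pen)
        W = [W[i] - eta * grads[i] for i in range(n_nodes)]          # synchronous gradient step

    fit_error = sum(float(np.sum((X[i] @ W[i] - y[i]) ** 2)) for i in range(n_nodes))
    disagreement = max(float(np.linalg.norm(W[i] - W[0])) for i in range(n_nodes))
    print(f"total fit error: {fit_error:.3e}   max node disagreement: {disagreement:.3e}")

Both printed quantities should shrink toward zero as the iteration count grows, illustrating the abstract's point that consensus emerges at the interpolating minimum without taking the penalty parameter μ to infinity.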


