SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization

03/22/2018
by Hoi-To Wai, et al.

We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite sum optimization problems. SUCAG is an unbiased total gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under a general asynchronous model of computation, in which functions are selected infinitely often but with delays that can grow sublinearly. For strongly convex problems, we establish linear convergence for the SUCAG method. When the initialization point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach to implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on the communication graph. We show that our analysis applies as long as the undirected graph is connected and, notably, establishes an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merit of our algorithm over existing methods.
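To make the curvature-aided idea concrete, below is a minimal Python sketch of the gradient-tracking mechanism behind curvature-aided methods such as SUCAG: each component's stale gradient is Taylor-corrected with its stored Hessian, so the aggregated surrogate gradient stays accurate without re-evaluating every component. This is a simplified synchronous loop on an assumed toy least-squares problem with uniform sampling; the problem setup, the variable names (grad_i, hess_i, phi, B_bar, c_bar), and the step-size choice are all our illustrative assumptions, and the sketch omits the unbiasedness correction, the asynchronous delays, and the Markov-chain (random-walk) sampling analyzed in the paper.

```python
import numpy as np

# Assumed toy least-squares finite sum: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th component at x."""
    return A[i] * (A[i] @ x - b[i])

def hess_i(x, i):
    """Hessian of the i-th component (constant for least squares)."""
    return np.outer(A[i], A[i])

x = np.zeros(d)
phi = np.zeros((n, d))  # last iterate at which each component was visited
H = np.stack([hess_i(phi[i], i) for i in range(n)])
c = np.stack([grad_i(phi[i], i) - H[i] @ phi[i] for i in range(n)])
B_bar, c_bar = H.mean(axis=0), c.mean(axis=0)  # aggregated curvature and offset

# Conservative step size: 1/L, with L the largest eigenvalue of the average Hessian.
step = 1.0 / np.linalg.eigvalsh(A.T @ A / n).max()

for k in range(3000):
    # Uniform sampling for simplicity; the paper's distributed variant instead
    # selects components by gossiping along a random walk on the graph.
    i = int(rng.integers(n))
    # Curvature-aided surrogate: each stale gradient g_i(phi_i) is corrected by
    # its Hessian, g_i(phi_i) + H_i (x - phi_i) ~ g_i(x); the running aggregates
    # evaluate the mean of these corrections as c_bar + B_bar @ x in O(d^2).
    x = x - step * (c_bar + B_bar @ x)
    # Refresh component i's snapshot and update the aggregates incrementally.
    H_new, g_new = hess_i(x, i), grad_i(x, i)
    c_new = g_new - H_new @ x
    B_bar += (H_new - H[i]) / n
    c_bar += (c_new - c[i]) / n
    H[i], c[i], phi[i] = H_new, c_new, x

print("residual gradient norm:", np.linalg.norm(A.T @ (A @ x - b) / n))
```

Maintaining the aggregates B_bar and c_bar incrementally keeps the per-iteration cost independent of n, which is the design point that lets curvature-aided tracking approach full-gradient accuracy at an incremental-gradient price.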

Related research

research · 10/24/2017
Curvature-aided Incremental Aggregated Gradient Method
We propose a new algorithm for finite sum optimization which we call the...

research · 09/10/2013
Minimizing Finite Sums with the Stochastic Average Gradient
We propose the stochastic average gradient (SAG) method for optimizing t...

research · 05/31/2018
On Curvature-aided Incremental Aggregated Gradient Methods
This paper studies an acceleration technique for incremental aggregated ...

research · 10/24/2019
Katyusha Acceleration for Convex Finite-Sum Compositional Optimization
Structured problems arise in many applications. To solve these problems,...

research · 01/21/2022
High-Dimensional Inference over Networks: Linear Convergence and Statistical Guarantees
We study sparse linear regression over a network of agents, modeled as a...

research · 01/31/2020
Convergence rate analysis and improved iterations for numerical radius computation
We analyze existing methods for computing the numerical radius and intro...

research · 01/31/2022
A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
Bilevel optimization, the problem of minimizing a value function which i...
