Random gradient extrapolation for distributed and stochastic optimization

11/15/2017
by   Guanghui Lan, et al.

In this paper, we consider a class of finite-sum convex optimization problems defined over a distributed multiagent network with m agents connected to a central server. In particular, the objective function consists of the average of m (> 1) smooth components associated with each network agent together with a strongly convex term. Our major contribution is to develop a new randomized incremental gradient algorithm, namely the random gradient extrapolation method (RGEM), which does not require any exact gradient evaluation even for the initial point, yet achieves the optimal O(log(1/ϵ)) complexity bound in terms of the total number of gradient evaluations of component functions needed to solve the finite-sum problem. Furthermore, we demonstrate that for stochastic finite-sum optimization problems, RGEM maintains the optimal O(1/ϵ) complexity (up to a certain logarithmic factor) in terms of the number of stochastic gradient computations, while attaining an O(log(1/ϵ)) complexity in terms of communication rounds (each round involving only one agent). It is worth noting that the former bound is independent of the number of agents m, while the latter depends only linearly on m, or even on √m for ill-conditioned problems. To the best of our knowledge, this is the first time that these complexity bounds have been obtained for distributed and stochastic optimization problems. Moreover, our algorithms are developed based on a novel dual perspective of Nesterov's accelerated gradient method.
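To make the finite-sum, one-agent-per-round setting concrete, here is a minimal sketch of a randomized incremental gradient loop that keeps a per-agent gradient table, applied to a synthetic strongly convex least-squares problem. It is not the paper's RGEM (which relies on gradient extrapolation derived from a dual view of Nesterov's method); the direction below is a SAGA-style variance-reduced estimate, and all data, dimensions, and step sizes are invented for illustration.

```python
# Sketch of a randomized incremental gradient method for
#   min_x (1/m) * sum_i f_i(x) + (mu/2) * ||x||^2,
# where f_i(x) = 0.5 * ||A_i x - b_i||^2 is held by agent i.
# Illustrative only; not the paper's RGEM update.
import numpy as np

rng = np.random.default_rng(0)
m, d = 20, 5                                          # agents, dimension
A = [rng.standard_normal((10, d)) for _ in range(m)]  # agent i's local data
b = [rng.standard_normal(10) for _ in range(m)]
mu = 0.1                                              # strong convexity parameter

def grad_f(i, x):
    """Gradient of agent i's smooth component f_i."""
    return A[i].T @ (A[i] @ x - b[i])

x = np.zeros(d)
# Gradient table starts at zero: no component gradient is evaluated up front,
# in the spirit of the abstract's "no exact gradient evaluation at the initial point".
table = np.zeros((m, d))
avg = table.mean(axis=0)          # running average of stored gradients
eta = 0.01                        # step size (hand-tuned for this toy problem)

for t in range(5000):
    i = rng.integers(m)           # one agent activated per round
    g_new = grad_f(i, x)
    # Variance-reduced direction built from the gradient table plus the
    # strongly convex term; RGEM instead extrapolates stored gradient information.
    direction = g_new - table[i] + avg + mu * x
    avg += (g_new - table[i]) / m # keep the running average consistent
    table[i] = g_new              # refresh agent i's stored gradient
    x -= eta * direction

print("approximate solution:", x)
```

In this template each round touches a single agent's data, which mirrors the "each round involves only one agent" communication pattern discussed in the abstract.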


