Distributed Stochastic Optimization under a General Variance Condition

01/30/2023
by Kun Huang, et al.

Distributed stochastic optimization has drawn great attention recently due to its effectiveness in solving large-scale machine learning problems. However, although numerous algorithms have been proposed with empirical success, their theoretical guarantees remain restrictive and rely on certain boundedness conditions on the stochastic gradients, ranging from uniform boundedness to the relaxed growth condition. In addition, how to characterize the data heterogeneity among the agents and its impact on algorithmic performance remains challenging. Motivated by these issues, we revisit the classical FedAvg algorithm for distributed stochastic optimization and establish convergence results for smooth nonconvex objective functions under only a mild variance condition on the stochastic gradients. Almost sure convergence to a stationary point is also established under the same condition. Moreover, we discuss a more informative measure of data heterogeneity and its implications.
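For context, the boundedness assumptions that the abstract contrasts with the paper's variance condition are commonly written as follows; the notation is generic (g_i denotes an unbiased stochastic gradient of the local objective f_i) and is not necessarily the paper's.

```latex
% Common assumptions on an unbiased stochastic gradient g_i(x), E[g_i(x)] = \nabla f_i(x).
\mathbb{E}\,\|g_i(x)\|^2 \le G^2
  \quad\text{(uniform boundedness)}, \qquad
\mathbb{E}\,\|g_i(x)\|^2 \le c_1\,\|\nabla f_i(x)\|^2 + c_2
  \quad\text{(relaxed growth)}, \qquad
\mathbb{E}\,\|g_i(x) - \nabla f_i(x)\|^2 \le \sigma^2
  \quad\text{(bounded variance)}.
```

The FedAvg scheme revisited in the paper alternates local stochastic gradient steps on each agent with periodic server-side averaging. The sketch below is a minimal NumPy illustration of that generic loop, assuming synthetic quadratic local losses, full agent participation, and a fixed step size; it is not the paper's experimental setup or exact algorithmic variant.

```python
# Minimal FedAvg / local-SGD sketch on synthetic heterogeneous quadratics.
# All problem data and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 8, 5
# Hypothetical local objectives f_i(x) = 0.5 * ||x - b_i||^2 with distinct b_i,
# so the agents' data (and hence their minimizers) are heterogeneous.
targets = rng.normal(size=(n_agents, dim))

def local_stoch_grad(x, b, noise=0.1):
    """Unbiased stochastic gradient of 0.5 * ||x - b||^2 with additive noise."""
    return (x - b) + noise * rng.normal(size=x.shape)

def fedavg(rounds=50, local_steps=5, lr=0.1):
    x = np.zeros(dim)                       # server model
    for _ in range(rounds):
        local_models = []
        for i in range(n_agents):           # full participation for simplicity
            y = x.copy()
            for _ in range(local_steps):    # local SGD updates on agent i
                y -= lr * local_stoch_grad(y, targets[i])
            local_models.append(y)
        x = np.mean(local_models, axis=0)   # server averages the local iterates
    return x

x_star = targets.mean(axis=0)               # minimizer of the average objective
x_hat = fedavg()
print("distance to optimum:", np.linalg.norm(x_hat - x_star))
```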


