STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning

06/19/2021
by Prashant Khanduri, et al.

Federated Learning (FL) refers to the paradigm where multiple worker nodes (WNs) build a joint model using their local data. Despite extensive research, for a generic non-convex FL problem it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to reach the desired solution. This work addresses this question and considers a class of stochastic algorithms in which the WNs perform a few local updates before communication. We show that when both the WNs' and the server's directions are chosen based on a stochastic momentum estimator, the algorithm requires 𝒪̃(ϵ^-3/2) samples and 𝒪̃(ϵ^-1) communication rounds to compute an ϵ-stationary solution. To the best of our knowledge, this is the first FL algorithm to achieve such near-optimal sample and communication complexities simultaneously. Further, we show that there is a trade-off curve between the local update frequency and the local minibatch size on which the above sample and communication complexities can be maintained. Finally, we show that for the classical FedAvg (a.k.a. Local SGD), a momentum-less special case of STEM, a similar trade-off curve exists, albeit with worse sample and communication complexities. Our insights on this trade-off provide guidelines for choosing the key design elements of FL algorithms (the local update frequency, the WNs' and the server's update directions, and the minibatch sizes) to achieve the best performance.
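To make the algorithmic pattern concrete, below is a minimal sketch of a momentum-based local-update FL scheme of the kind the abstract describes: each worker runs a few local steps along a STORM-style stochastic momentum direction before the server aggregates. All names (`stem_sketch`, `local_steps`, `beta`, `eta`) are illustrative, not from the paper, and for brevity the server simply averages the local iterates rather than applying its own momentum estimator as the full two-sided scheme does.

```python
import numpy as np

rng = np.random.default_rng(0)

def row_grad(x, A, b, i):
    # Gradient of the single-sample loss 0.5 * (a_i @ x - b_i)^2.
    a_i = A[i]
    return (a_i @ x - b[i]) * a_i

def stem_sketch(A_list, b_list, rounds=200, local_steps=5, eta=0.05, beta=0.9):
    """Local-update FL with a STORM-style momentum direction at each worker.

    A simplification of the two-sided scheme: the server averages local
    iterates instead of maintaining its own momentum estimator."""
    n_workers = len(A_list)
    dim = A_list[0].shape[1]
    x = np.zeros(dim)
    # Initialize each worker's direction with a fresh stochastic gradient.
    d = [row_grad(x, A_list[k], b_list[k], rng.integers(len(b_list[k])))
         for k in range(n_workers)]
    for _ in range(rounds):
        xs = [x.copy() for _ in range(n_workers)]
        for k in range(n_workers):
            A, b = A_list[k], b_list[k]
            for _ in range(local_steps):
                x_prev = xs[k].copy()
                xs[k] = xs[k] - eta * d[k]   # local step along momentum direction
                i = rng.integers(len(b))     # one sample, reused at both iterates
                d[k] = (row_grad(xs[k], A, b, i)
                        + (1 - beta) * (d[k] - row_grad(x_prev, A, b, i)))
        x = np.mean(xs, axis=0)  # communication round: average local models
    return x
```

The recursion `d = g_new + (1 - beta) * (d - g_old)`, with both gradients evaluated on the same sample, is what distinguishes this momentum estimator from plain local SGD: the correction term reuses past information to reduce variance without large minibatches.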


Related research

- Faster Adaptive Federated Learning (12/02/2022): Federated learning has attracted increasing attention with the emergence...
- Double Momentum SGD for Federated Learning (02/08/2021): Communication efficiency is crucial in federated learning. Conducting ma...
- Accelerating Federated Learning via Momentum Gradient Descent (10/08/2019): Federated learning (FL) provides a communication-efficient approach to s...
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data (05/22/2020): Federated Learning (FL) has become a popular paradigm for learning from ...
- Federated Multi-Sequence Stochastic Approximation with Local Hypergradient Estimation (06/02/2023): Stochastic approximation with multiple coupled sequences (MSA) has found...
- On the Convergence of Momentum-Based Algorithms for Federated Stochastic Bilevel Optimization Problems (04/28/2022): In this paper, we studied the federated stochastic bilevel optimization ...
- FedShuffle: Recipes for Better Use of Local Work in Federated Learning (04/27/2022): The practice of applying several local updates before aggregation across...
