Federated Averaging Langevin Dynamics: Toward a unified theory and new algorithms

10/31/2022
by Vincent Plassier, et al.

This paper focuses on Bayesian inference in the federated learning (FL) context. While several distributed MCMC algorithms have been proposed, few consider the specific limitations of FL such as communication bottlenecks and statistical heterogeneity. Recently, Federated Averaging Langevin Dynamics (FALD) was introduced, which extends the Federated Averaging algorithm to Bayesian inference. We obtain a novel tight non-asymptotic upper bound on the Wasserstein distance to the global posterior for FALD. This bound highlights the effects of statistical heterogeneity, which causes a drift in the local updates that negatively impacts convergence. We propose a new algorithm, VR-FALD*, that uses control variates to correct the client drift. We establish non-asymptotic bounds showing that VR-FALD* is not affected by statistical heterogeneity. Finally, we illustrate our results on several FL benchmarks for Bayesian inference.
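To make the setup concrete, below is a minimal Python sketch of a FALD-style update on an assumed toy Gaussian model: each client runs local stochastic gradient Langevin steps on its own data, and the server periodically averages the client parameters. The model, hyper-parameters, and the exact gradient/noise scaling are illustrative assumptions, not the paper's algorithm. With heterogeneous client data, the local Langevin steps pull each client toward its own local posterior; that is the client drift the paper bounds and that VR-FALD* corrects with control variates.

```python
import numpy as np

# Minimal sketch of a FALD-style sampler (illustrative only, not the authors'
# exact algorithm): clients run local Langevin (SGLD-style) steps on their own
# data, then the server averages the resulting parameters. The toy Gaussian
# mean model, the client heterogeneity, and all hyper-parameters below are
# assumptions made for this example.

rng = np.random.default_rng(0)
dim, n_clients, n_rounds, local_steps, step = 5, 4, 500, 5, 1e-3

# Statistically heterogeneous clients: each client's data is centred differently.
client_data = [c + rng.normal(size=(100, dim)) for c in range(n_clients)]

def grad_log_post_local(theta, data, n_clients):
    """Client's share of the log-posterior gradient: local Gaussian likelihood
    plus an equal share of a N(0, I) prior."""
    return np.sum(data - theta, axis=0) - theta / n_clients

theta_global = np.zeros(dim)
for _ in range(n_rounds):
    local_params = []
    for data in client_data:
        theta = theta_global.copy()
        for _ in range(local_steps):
            grad = grad_log_post_local(theta, data, n_clients)
            # Langevin step: gradient ascent on the log-posterior plus injected noise.
            theta = theta + step * grad + np.sqrt(2.0 * step) * rng.normal(size=dim)
        local_params.append(theta)
    theta_global = np.mean(local_params, axis=0)  # server-side averaging step

print("final averaged sample (approximate posterior mean):", theta_global)
```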

Related research

06/23/2021
Behavior Mimics Distribution: Combining Individual and Group Behaviors for Federated Learning
Federated Learning (FL) has become an active and promising distributed m...

10/06/2022
Communication-Efficient and Drift-Robust Federated Learning via Elastic Net
Federated learning (FL) is a distributed method to train a global model ...

11/17/2020
Federated Composite Optimization
Federated Learning (FL) is a distributed learning paradigm which scales ...

10/07/2022
Depersonalized Federated Learning: Tackling Statistical Heterogeneity by Alternating Stochastic Gradient Descent
Federated learning (FL) has gained increasing attention recently, which ...

10/11/2020
Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms
Federated learning is typically approached as an optimization problem, w...

12/21/2021
Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling
Federated learning (FL) algorithms usually sample a fraction of clients ...

07/17/2022
Fast Composite Optimization and Statistical Recovery in Federated Learning
As a prevalent distributed learning paradigm, Federated Learning (FL) tr...
