ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression

by Avetik Karagulyan et al.

Federated sampling algorithms have recently gained great popularity in the machine learning and statistics communities. This paper studies variants of such algorithms called Error Feedback Langevin algorithms (ELF). In particular, we analyze combinations of EF21 and EF21-P with federated Langevin Monte Carlo. We propose three algorithms, P-ELF, D-ELF, and B-ELF, that use primal, dual, and bidirectional compressors, respectively. We analyze the proposed methods under the log-Sobolev inequality and provide non-asymptotic convergence guarantees.
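To illustrate the flavor of dual (worker-to-server) compression with error feedback in a federated Langevin loop, here is a minimal sketch. It is not the paper's exact D-ELF algorithm: the top-k compressor, the EF21-style gradient-estimate recursion, and all function and parameter names (`topk`, `d_elf_sketch`, `step`, `k`) are illustrative assumptions.

```python
import numpy as np

def topk(v, k):
    # Top-k contractive compressor: keep the k largest-magnitude entries,
    # zero out the rest (a common choice of biased compressor).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def d_elf_sketch(grads, x0, step, n_iters, k, rng):
    """Sketch of a dual-compressed federated Langevin iteration.

    grads   -- list of per-client gradient oracles of the potential
    x0      -- initial iterate
    step    -- Langevin step size
    k       -- sparsity level of the top-k compressor
    """
    n = len(grads)
    d = x0.size
    x = x0.copy()
    # Each client maintains a local gradient estimate (EF21-style state).
    g = [np.zeros(d) for _ in range(n)]
    for _ in range(n_iters):
        for i in range(n):
            # Client i compresses only the innovation (new gradient minus
            # its current estimate) and updates the estimate with it.
            g[i] = g[i] + topk(grads[i](x) - g[i], k)
        avg = sum(g) / n
        # Server performs a Langevin step with injected Gaussian noise.
        x = x - step * avg + np.sqrt(2.0 * step) * rng.standard_normal(d)
    return x
```

Because only the compressed innovations travel to the server, the per-round communication is sparse, while the recursive estimates keep the aggregated gradient asymptotically unbiased in the error-feedback sense.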



