
QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

by Maxime Vono, et al.

Federated learning aims at conducting inference when data are decentralised and locally stored on several clients, under two main constraints: data ownership and communication overhead. In this paper, we address these issues under the Bayesian paradigm. To this end, we propose a novel Markov chain Monte Carlo algorithm, coined QLSD, built upon quantised versions of stochastic gradient Langevin dynamics. To improve performance in the big data regime, we introduce variance-reduced alternatives of our methodology, referred to as QLSD^⋆ and QLSD^++. We provide both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms and illustrate their benefits on several federated learning benchmarks.
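To make the idea concrete, here is a minimal sketch of quantised Langevin dynamics on a toy federated problem. This is not the paper's QLSD implementation: the QSGD-style stochastic quantiser, the Gaussian toy posterior, and all names and hyperparameters below are illustrative assumptions. Each client sends an unbiased quantised gradient of its local negative log-likelihood; the server aggregates them and takes a Langevin step.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantise(v, s=8):
    """Unbiased stochastic quantiser (QSGD-style, an illustrative choice):
    keep the vector norm, randomly round each coordinate's relative
    magnitude to one of s levels so the result is unbiased."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    ratio = np.abs(v) / norm * s
    lower = np.floor(ratio)
    levels = lower + (rng.random(v.shape) < (ratio - lower))
    return np.sign(v) * levels * norm / s

# Toy problem: clients hold 2-d Gaussian data with a shared unknown mean;
# the Bayesian target is the Gaussian posterior over that mean.
true_mean = np.array([2.0, -1.0])
clients = [rng.normal(loc=true_mean, scale=1.0, size=(50, 2))
           for _ in range(5)]

def client_grad(theta, data):
    # Gradient of the client's negative log-likelihood sum_i N(x_i | theta, I).
    return (theta - data).sum(axis=0)

gamma = 1e-3            # step size
theta = np.zeros(2)
samples = []
for t in range(6000):
    # Server aggregates quantised client gradients, adds the N(0, I)
    # prior contribution, then takes a Langevin step with injected noise.
    g = sum(stochastic_quantise(client_grad(theta, d)) for d in clients)
    g += theta
    theta = theta - gamma * g + np.sqrt(2.0 * gamma) * rng.normal(size=2)
    if t >= 1000:       # discard burn-in
        samples.append(theta.copy())

print(np.mean(samples, axis=0))  # close to the posterior mean near true_mean
```

With only a few quantisation levels per coordinate, each client transmits far fewer bits than a full-precision gradient, while the unbiasedness of the quantiser keeps the chain targeting (approximately) the right posterior.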




Related research

FedPop: A Bayesian Approach for Personalised Federated Learning

Personalised federated learning (FL) aims at collaboratively learning a ...

Efficient and Convergent Federated Learning

Federated learning has shown its advances over the last few years but is...

Exact Penalty Method for Federated Learning

Federated learning has burgeoned recently in machine learning, giving ri...

SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning

Efficient federated learning is one of the key challenges for training a...

Accurate and Fast Federated Learning via IID and Communication-Aware Grouping

Federated learning has emerged as a new paradigm of collaborative machin...

ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression

Federated sampling algorithms have recently gained great popularity in t...

Federated Bayesian Computation via Piecewise Deterministic Markov Processes

When performing Bayesian computations in practice, one is often faced wi...