
QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

06/01/2021
by Maxime Vono, et al.

Federated learning aims at conducting inference when data are decentralised and locally stored on several clients, under two main constraints: data ownership and communication overhead. In this paper, we address these issues under the Bayesian paradigm. To this end, we propose a novel Markov chain Monte Carlo algorithm, coined QLSD, built upon quantised versions of stochastic gradient Langevin dynamics. To improve performance in a big data regime, we introduce variance-reduced alternatives of our methodology referred to as QLSD^⋆ and QLSD^++. We provide both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms and illustrate their benefits on several federated learning benchmarks.
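To make the mechanism concrete, the following is a minimal single-process sketch of a QLSD-style communication round, assuming a toy Gaussian location model with a flat prior, ten simulated clients, and a standard unbiased stochastic quantiser. The model, quantiser, step size, and function names are illustrative assumptions; this is not the authors' exact algorithm, and it omits the variance-reduced QLSD^⋆ and QLSD^++ variants.

```python
# Sketch of quantised stochastic gradient Langevin dynamics in a federated
# setting. All modelling choices below (Gaussian likelihood, flat prior,
# s-level stochastic quantiser, hyper-parameters) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantise(v, s=16):
    """Unbiased stochastic quantiser: each coordinate of v is rounded onto a
    grid of s levels of |v|/||v||, so that E[Q(v)] = v with fewer bits sent."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    levels = np.floor(s * np.abs(v) / norm + rng.random(v.shape)) / s
    return norm * np.sign(v) * levels

def client_gradient(theta, data, batch_size=32):
    """Mini-batch stochastic gradient of one client's negative log-likelihood
    (Gaussian location model, unit noise), rescaled to the full local set."""
    idx = rng.choice(len(data), size=batch_size, replace=False)
    return (len(data) / batch_size) * np.sum(theta - data[idx], axis=0)

def qlsd_round(theta, clients_data, step_size=1e-4):
    """One communication round: each client uploads a quantised stochastic
    gradient, the server sums them and takes an unadjusted Langevin step."""
    grad = np.sum([stochastic_quantise(client_gradient(theta, d))
                   for d in clients_data], axis=0)
    noise = np.sqrt(2.0 * step_size) * rng.standard_normal(theta.shape)
    return theta - step_size * grad + noise

# Toy run: 10 clients, 2-dimensional Gaussian data centred at 1, 1000 rounds.
clients_data = [rng.normal(loc=1.0, size=(500, 2)) for _ in range(10)]
theta = np.zeros(2)
samples = []
for _ in range(1000):
    theta = qlsd_round(theta, clients_data)
    samples.append(theta)
print("posterior mean estimate:", np.mean(samples[200:], axis=0))
```

The design point reflected here is that only quantised gradients travel from clients to the server, trading a small amount of extra (unbiased) noise per round for lower communication overhead, while the Gaussian injection at the server turns the recursion into a posterior sampler rather than an optimiser.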


Related research

06/07/2022 · FedPop: A Bayesian Approach for Personalised Federated Learning
Personalised federated learning (FL) aims at collaboratively learning a ...

05/03/2022 · Efficient and Convergent Federated Learning
Federated learning has shown its advances over the last few years but is...

08/23/2022 · Exact Penalty Method for Federated Learning
Federated learning has burgeoned recently in machine learning, giving ri...

11/29/2021 · SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning
Efficient federated learning is one of the key challenges for training a...

12/09/2020 · Accurate and Fast Federated Learning via IID and Communication-Aware Grouping
Federated learning has emerged as a new paradigm of collaborative machin...

03/08/2023 · ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression
Federated sampling algorithms have recently gained great popularity in t...

10/25/2022 · Federated Bayesian Computation via Piecewise Deterministic Markov Processes
When performing Bayesian computations in practice, one is often faced wi...