Efficient and Convergent Federated Learning

05/03/2022
by Shenglong Zhou et al.

Federated learning has advanced rapidly over the last few years but still faces several challenges: how to save communication resources, how to reduce computational costs, and whether the resulting algorithms converge. To address these issues, this paper proposes a new federated learning algorithm, FedGiA, that combines gradient descent with the inexact alternating direction method of multipliers (ADMM). FedGiA is shown to be computation- and communication-efficient and to converge linearly under mild conditions.
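To make the "gradient descent plus inexact ADMM" idea concrete, the following is a minimal sketch of a consensus-ADMM round in which each client replaces an exact subproblem solve with a few gradient steps. It is an illustrative assumption based on the standard inexact ADMM template for federated learning, not the exact FedGiA specification; the function name `fedgia_like_round`, the step sizes, and the update order are hypothetical.

```python
import numpy as np

# Sketch (assumed setup): clients hold local losses f_i and the consensus
# problem is min_z sum_i f_i(z), rewritten with local copies x_i = z.
# Each round, every client takes a few gradient steps on its augmented
# Lagrangian term (the "inexact" solve), then the server aggregates.

def fedgia_like_round(x, pi, z, grad_f, sigma=1.0, lr=0.1, local_steps=3):
    """One communication round: local inexact updates, then aggregation.

    x, pi  : arrays of shape (m, d), local models and dual variables
    z      : array of shape (d,), global model held by the server
    grad_f : list of m callables, grad_f[i](w) -> gradient of client i's loss
    """
    m = x.shape[0]
    for i in range(m):
        for _ in range(local_steps):
            # Gradient of f_i(x_i) + <pi_i, x_i - z> + (sigma/2)||x_i - z||^2
            g = grad_f[i](x[i]) + pi[i] + sigma * (x[i] - z)
            x[i] = x[i] - lr * g          # inexact subproblem solve
    # Server aggregation: average of primal plus scaled dual messages
    z = np.mean(x + pi / sigma, axis=0)
    # Dual ascent step on each client
    for i in range(m):
        pi[i] = pi[i] + sigma * (x[i] - z)
    return x, pi, z

# Toy usage: f_i(w) = 0.5 * ||w - a_i||^2, so the consensus minimizer is mean(a)
m, d = 4, 3
rng = np.random.default_rng(0)
a = rng.normal(size=(m, d))
grads = [lambda w, ai=ai: w - ai for ai in a]
x, pi, z = np.zeros((m, d)), np.zeros((m, d)), np.zeros(d)
for _ in range(50):
    x, pi, z = fedgia_like_round(x, pi, z, grads, sigma=1.0, lr=0.2)
```

The communication saving in this template comes from clients exchanging messages only once per round, after several cheap local gradient steps, rather than after every exact subproblem solve.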

Related research

Communication-Efficient ADMM-based Federated Learning (10/28/2021)
Federated Learning via Inexact ADMM (04/22/2022)
Exact Penalty Method for Federated Learning (08/23/2022)
Stochastic Unrolled Federated Learning (05/24/2023)
Federated Quantum Natural Gradient Descent for Quantum Federated Learning (08/15/2022)
QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning (06/01/2021)
Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing (08/31/2023)
