Exact Penalty Method for Federated Learning

08/23/2022
by   Shenglong Zhou, et al.

Federated learning has burgeoned recently in machine learning, giving rise to a variety of research topics. Popular optimization algorithms are based on the frameworks of (stochastic) gradient descent or the alternating direction method of multipliers. In this paper, we deploy an exact penalty method for federated learning and propose an algorithm, FedEPM, that tackles four critical issues in federated learning: communication efficiency, computational complexity, the stragglers' effect, and data privacy. Moreover, the algorithm is proven to converge and is shown empirically to perform well.
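To illustrate the general idea behind an exact penalty approach in the federated setting (this is a generic sketch, not the paper's FedEPM algorithm; the penalty weight, step size, and toy least-squares losses below are all assumptions), one can replace the consensus constraint w_i = z with a nonsmooth penalty term sigma * ||w_i - z||_1 added to each client's loss. Unlike a quadratic penalty, a nonsmooth penalty can recover the constrained solution exactly for a sufficiently large finite sigma. Clients then take subgradient steps locally, and the server minimizes the penalty over z, which for the l1 norm is the coordinate-wise median of the client iterates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy federated problem: m clients, each with a local least-squares loss
# f_i(w) = 0.5 * ||A_i w - b_i||^2 (assumed data, for illustration only).
m, d, n = 5, 3, 20
A = [rng.normal(size=(n, d)) for _ in range(m)]
b = [rng.normal(size=n) for _ in range(m)]

def local_grad(i, w):
    """Gradient of f_i(w) = 0.5 * ||A_i w - b_i||^2."""
    return A[i].T @ (A[i] @ w - b[i])

sigma = 50.0           # penalty weight (assumed large enough for exactness)
eta = 1e-3             # client step size (assumption)
W = np.zeros((m, d))   # client iterates w_i
z = np.zeros(d)        # server variable

for rnd in range(500):
    # Clients: subgradient step on f_i(w_i) + sigma * ||w_i - z||_1
    for i in range(m):
        g = local_grad(i, W[i]) + sigma * np.sign(W[i] - z)
        W[i] = W[i] - eta * g
    # Server: for the l1 penalty, the minimizer over z is the
    # coordinate-wise median of the client iterates
    z = np.median(W, axis=0)

# After training, the clients should be close to consensus and the
# aggregate least-squares loss at z should have decreased from zero init.
consensus_gap = np.max(np.abs(W - z))
final_loss = sum(0.5 * np.linalg.norm(A[i] @ z - b[i]) ** 2 for i in range(m))
init_loss = sum(0.5 * np.linalg.norm(b[i]) ** 2 for i in range(m))
```

Note that subgradient steps on the nonsmooth penalty oscillate within a band proportional to eta * sigma, so the consensus gap shrinks but does not vanish exactly; diminishing step sizes or proximal steps are common remedies.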

Related research:

- Efficient and Convergent Federated Learning (05/03/2022): Federated learning has shown its advances over the last few years but is...
- Overcoming Forgetting in Federated Learning on Non-IID Data (10/17/2019): We tackle the problem of Federated Learning in the non i.i.d. case, in w...
- Personalized Federated Learning via Convex Clustering (02/01/2022): We propose a parametric family of algorithms for personalized federated ...
- Communication-Efficient ADMM-based Federated Learning (10/28/2021): Federated learning has shown its advances over the last few years but is...
- Stochastic Unrolled Federated Learning (05/24/2023): Algorithm unrolling has emerged as a learning-based optimization paradig...
- QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning (06/01/2021): Federated learning aims at conducting inference when data are decentrali...
- FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning (11/22/2021): Federated Learning (FL) is an increasingly popular machine learning para...
