Composite federated learning with heterogeneous data

09/04/2023
by Jiaojiao Zhang, et al.

We propose a novel algorithm for solving the composite Federated Learning (FL) problem. The algorithm handles non-smooth regularization by strategically decoupling the proximal operator from communication, and it addresses client drift without any assumption of data similarity. Moreover, each worker uses local updates to reduce the frequency of communication with the server and transmits only a d-dimensional vector per communication round. We prove that the algorithm converges linearly to a neighborhood of the optimal solution, and we demonstrate its superiority over state-of-the-art methods in numerical experiments.
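The abstract does not give the algorithm's exact update rules, but the general pattern it describes, workers running several local gradient steps on their smooth losses, communicating a single d-dimensional vector, and the non-smooth regularizer being handled through a proximal step kept separate from communication, can be sketched roughly as follows. This is an illustrative sketch only, using an l1 regularizer and hypothetical helper names (`prox_l1`, `local_update`, `composite_fl_round`), not the paper's actual method.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def local_update(x, grad_fn, step, num_local_steps):
    """A worker takes several local gradient steps on its smooth loss,
    reducing how often it must communicate with the server."""
    for _ in range(num_local_steps):
        x = x - step * grad_fn(x)
    return x

def composite_fl_round(x_global, worker_grads, step, num_local_steps, reg):
    """One communication round: each worker runs local updates and sends
    back one d-dimensional vector; the server averages them and applies
    the proximal step for the non-smooth regularizer (the prox is thus
    decoupled from communication, as the abstract describes)."""
    updates = [local_update(x_global.copy(), g, step, num_local_steps)
               for g in worker_grads]
    x_avg = np.mean(updates, axis=0)   # one d-dim vector per worker
    return prox_l1(x_avg, step * reg)  # prox applied server-side only
```

For instance, with quadratic worker losses f_i(x) = ||x - a_i||^2 / 2 the gradient callbacks are simply `lambda x: x - a_i`, and iterating `composite_fl_round` drives the global model toward a soft-thresholded average of the a_i.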

Related research

- FedADMM: A Federated Primal-Dual Algorithm Allowing Partial Participation (03/28/2022). Federated learning is a framework for distributed optimization that plac...
- FedVARP: Tackling the Variance Due to Partial Client Participation in Federated Learning (07/28/2022). Data-heterogeneous federated learning (FL) systems suffer from two signi...
- Communication-Efficient and Drift-Robust Federated Learning via Elastic Net (10/06/2022). Federated learning (FL) is a distributed method to train a global model ...
- Federated Composite Optimization (11/17/2020). Federated Learning (FL) is a distributed learning paradigm which scales ...
- Locally Adaptive Federated Learning via Stochastic Polyak Stepsizes (07/12/2023). State-of-the-art federated learning algorithms such as FedAvg require ca...
- Fast Composite Optimization and Statistical Recovery in Federated Learning (07/17/2022). As a prevalent distributed learning paradigm, Federated Learning (FL) tr...
- Federated Composite Saddle Point Optimization (05/25/2023). Federated learning (FL) approaches for saddle point problems (SPP) have ...
