Scaff-PD: Communication Efficient Fair and Robust Federated Learning

by Yaodong Yu, et al.

We present Scaff-PD, a fast and communication-efficient algorithm for distributionally robust federated learning. Our approach improves fairness by optimizing a family of distributionally robust objectives tailored to heterogeneous clients. We leverage the special structure of these objectives and design an accelerated primal-dual (APD) algorithm that uses bias-corrected local steps (as in SCAFFOLD) to achieve significant gains in communication efficiency and convergence speed. We evaluate Scaff-PD on several benchmark datasets and demonstrate its effectiveness in improving fairness and robustness while maintaining competitive accuracy. Our results suggest that Scaff-PD is a promising approach for federated learning in resource-constrained and heterogeneous settings.
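The two ingredients the abstract names, a dual update over client weights (the distributionally robust part) and SCAFFOLD-style bias-corrected local primal steps, can be illustrated with a toy simulation. Everything below is a hedged sketch, not the paper's actual algorithm: the quadratic per-client losses, the multiplicative dual update, the step sizes, and the `scaff_pd_sketch` helper are all illustrative assumptions.

```python
import numpy as np

def client_loss_grad(x, target):
    # Hypothetical per-client objective: 0.5 * ||x - target||^2
    return x - target

def scaff_pd_sketch(targets, rounds=50, local_steps=5, lr=0.1, dual_lr=0.5):
    """Toy primal-dual federated loop (illustrative, not the paper's method).

    Primal: the shared model x takes bias-corrected local gradient steps on
    each client, SCAFFOLD-style.  Dual: a weight vector lam on the simplex
    is tilted toward higher-loss clients by mirror (multiplicative) ascent.
    """
    n = len(targets)
    d = targets[0].shape[0]
    x = np.zeros(d)
    lam = np.ones(n) / n               # dual weights over clients (simplex)
    c = np.zeros((n, d))               # per-client control variates
    c_global = np.zeros(d)             # server control variate

    for _ in range(rounds):
        losses = np.array([0.5 * np.sum((x - t) ** 2) for t in targets])
        # Dual mirror-ascent step: up-weight clients with larger loss
        lam = lam * np.exp(dual_lr * losses)
        lam /= lam.sum()

        new_x = np.zeros(d)
        new_c = np.zeros_like(c)
        for i, t in enumerate(targets):
            xi = x.copy()
            for _ in range(local_steps):
                g = client_loss_grad(xi, t)
                # Bias-corrected local step: subtract the client drift
                xi -= lr * (g - c[i] + c_global)
            # Standard SCAFFOLD control-variate refresh
            new_c[i] = c[i] - c_global + (x - xi) / (local_steps * lr)
            new_x += lam[i] * xi       # dual-weighted aggregation

        x = new_x
        c = new_c
        c_global = c.mean(axis=0)
    return x, lam
```

With two clients whose optima sit at 0 and 1, the dual weights push the model toward the point where both clients' losses are balanced (near 0.5), rather than toward whichever client would dominate a plain average, which is the fairness effect the abstract describes.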




Fair Resource Allocation in Federated Learning

Federated learning involves training statistical models in massive, hete...

A Primal-Dual Algorithm for Hybrid Federated Learning

Very few methods for hybrid federated learning, where clients only hold ...

Federated Learning with Heterogeneous Data: A Superquantile Optimization Approach

We present a federated learning framework that is designed to robustly d...

Clustered Scheduling and Communication Pipelining For Efficient Resource Management Of Wireless Federated Learning

This paper proposes using communication pipelining to enhance the wirele...

Resilient Constrained Learning

When deploying machine learning solutions, they must satisfy multiple re...

Hierarchically Fair Federated Learning

Federated learning facilitates collaboration among self-interested agent...

Achieving Model Fairness in Vertical Federated Learning

Vertical federated learning (VFL), which enables multiple enterprises po...
