Auto-weighted Robust Federated Learning with Corrupted Data Sources

by Shenghui Li, et al.

Federated learning provides a communication-efficient and privacy-preserving training process by enabling statistical models to be learned from massive numbers of participants while keeping their data on local clients. However, standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruption from outliers, systematic mislabeling, or even adversaries. Moreover, service providers are often prohibited from verifying the quality of data samples due to increasing concern over user data privacy. In this paper, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach that jointly learns the global model and the weights of local updates to provide robustness against corrupted data sources. We prove a learning bound on the expected risk with respect to the predictor and the clients' weights, which guides the definition of the objective for robust federated learning. The weights are allocated by comparing a client's empirical loss with the average loss of the best p clients (the p-average); clients with significantly higher losses are downweighted, lowering their contributions to the global model. We show that this approach achieves robustness when the data of corrupted clients is distributed differently from that of benign clients. To optimize the objective function, we propose a communication-efficient algorithm based on the blockwise minimization paradigm. We conduct experiments on multiple benchmark datasets, including CIFAR-10, FEMNIST, and Shakespeare, considering different deep neural network models. The results show that our solution is robust in different scenarios, including label shuffling, label flipping, and noisy features, and outperforms state-of-the-art methods in most of them.
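The weight-allocation idea described above can be sketched in code. The snippet below is a minimal illustration, not the paper's exact formula: it assumes a simple hinge-style rule in which each client's loss is compared against the p-average (the mean loss of the best p clients), and clients whose loss is far above it receive zero weight. The threshold of twice the p-average and the linear falloff are illustrative assumptions.

```python
import numpy as np

def allocate_weights(losses, p):
    """Sketch of auto-weighting against corrupted clients.

    Compares each client's empirical loss to the average loss of the
    best (lowest-loss) p clients, and downweights clients with
    significantly higher losses.  The hinge threshold (2 * p-average)
    is an illustrative assumption, not the paper's exact objective.
    """
    losses = np.asarray(losses, dtype=float)
    p_avg = np.sort(losses)[:p].mean()            # average loss of best-p clients
    raw = np.maximum(0.0, 2.0 * p_avg - losses)   # zero weight if loss >= 2 * p_avg
    if raw.sum() == 0.0:                          # degenerate case: fall back to uniform
        raw = np.ones_like(losses)
    return raw / raw.sum()                        # normalize onto the simplex

# Example: three benign clients and one corrupted client with a much
# higher empirical loss; the corrupted client's weight drops to zero.
weights = allocate_weights([0.2, 0.25, 0.3, 2.5], p=2)
```

With these weights in hand, a robust aggregation step would combine local updates as a weighted average instead of the uniform average used by standard federated averaging, which is what makes the corrupted client's contribution vanish.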


