Performance Weighting for Robust Federated Learning Against Corrupted Sources

05/02/2022
by Dimitris Stripelis, et al.

Federated Learning has emerged as a dominant computational paradigm for distributed machine learning. Its unique data privacy properties allow us to collaboratively train models while offering participating clients certain privacy-preserving guarantees. However, in real-world applications, a federated environment may consist of a mixture of benevolent and malicious clients, with the latter aiming to corrupt and degrade the federated model's performance. Different corruption schemes may be applied, such as model poisoning and data corruption. Here, we focus on the latter: the susceptibility of federated learning to various data corruption attacks. We show that the standard global aggregation scheme of local weights is inefficient in the presence of corrupted clients. To mitigate this problem, we propose a class of task-oriented, performance-based methods computed over a distributed validation dataset with the goal of detecting and mitigating corrupted clients. Specifically, we construct a robust weight aggregation scheme based on the geometric mean and demonstrate its effectiveness under random label shuffling and targeted label flipping attacks.
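To make the weighting idea concrete, the snippet below is a minimal NumPy sketch of geometric-mean performance weighting, not the authors' implementation. It assumes each client's local model has already been evaluated on every client's validation partition, yielding a matrix of performance scores; the names geometric_mean_weights, aggregate, and perf are hypothetical.

```python
import numpy as np

def geometric_mean_weights(performance_matrix, eps=1e-12):
    """Turn a (clients x validation partitions) matrix of performance
    scores (e.g., validation accuracy) into normalized aggregation weights.

    The geometric mean is dominated by its smallest factors, so a client
    whose model scores poorly on even a few validation partitions (as a
    corrupted client typically does) is sharply down-weighted.
    """
    scores = np.clip(np.asarray(performance_matrix, dtype=float), eps, None)
    geo_mean = np.exp(np.log(scores).mean(axis=1))  # per-client geometric mean
    return geo_mean / geo_mean.sum()                # normalize to sum to 1


def aggregate(client_models, agg_weights):
    """Weighted average of client model parameters (each a list of arrays)."""
    return [
        sum(w * layer for w, layer in zip(agg_weights, layers))
        for layers in zip(*client_models)
    ]


# Toy example: the third client's scores are consistent with corrupted labels,
# so its contribution to the global model is strongly suppressed.
perf = [[0.91, 0.88, 0.90],
        [0.89, 0.92, 0.87],
        [0.35, 0.30, 0.33]]
print(geometric_mean_weights(perf))  # approx. [0.42, 0.42, 0.15]
```

Compared with an arithmetic mean of the same scores, the geometric mean penalizes a client that performs badly on any validation partition rather than letting good scores elsewhere average the failure away, which is the behavior the label shuffling and label flipping scenarios call for.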

Related research

07/18/2021 · RobustFed: A Truth Inference Approach for Robust Federated Learning
Federated learning is a prominent framework that enables clients (e.g., ...

01/14/2021 · Auto-weighted Robust Federated Learning with Corrupted Data Sources
Federated learning provides a communication-efficient and privacy-preser...

04/14/2021 · Towards Causal Federated Learning For Enhanced Robustness and Privacy
Federated Learning is an emerging privacy-preserving distributed machine...

02/21/2023 · CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with Clustered Aggregation and Knowledge DIStilled Regularization
Federated learning enables edge devices to train a global model collabor...

12/05/2022 · FedCC: Robust Federated Learning against Model Poisoning Attacks
Federated Learning has emerged to cope with raising concerns about priva...

08/08/2023 · Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning
The main premise of federated learning is that machine learning model up...

04/27/2023 · Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization
Distributed learning paradigms, such as federated or decentralized learn...
