Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations

06/06/2023
by Torsten Krauß, et al.

Federated Learning (FL) trains machine learning models on data distributed across multiple devices, avoiding data transfer to a central location. This improves privacy, reduces communication costs, and enhances model performance. However, FL is prone to poisoning attacks, which can be untargeted, aiming to degrade the model's performance, or targeted, so-called backdoors, which inject adversarial behavior that can be triggered with appropriately crafted inputs. Because backdoor attacks strive for stealthiness, they are harder to detect. Mitigation techniques against poisoning attacks rely on monitoring certain metrics and filtering malicious model updates. However, previous works did not consider real-world adversaries and data distributions. To substantiate this claim, we define a new notion of strong adaptive adversaries that can adapt to multiple objectives simultaneously, and we demonstrate through extensive tests that existing defense methods can be circumvented in this adversary model. We also show that existing defenses have limited effectiveness when no assumptions are made about the underlying data distributions. To address realistic scenarios and adversary models, we propose Metric-Cascades (MESAS), a new defense that leverages multiple detection metrics simultaneously to filter poisoned model updates. This approach forces adaptive attackers into a heavy multi-objective optimization problem, and our evaluation with nine backdoors and three datasets shows that even our strong adaptive attacker cannot evade MESAS's detection. We show that MESAS outperforms existing defenses in distinguishing backdoors from distortions originating from different data distributions within and across clients. Overall, MESAS is the first defense that is robust against strong adaptive adversaries and is effective in real-world data scenarios while introducing a low overhead of 24.37s on average.
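The core idea described in the abstract, scoring each client's model update against several detection metrics and filtering outliers stage by stage, can be sketched as follows. This is a minimal illustration only: the metric choices (L2 distance, cosine similarity, update variance) and the median-absolute-deviation threshold are placeholder assumptions, not the actual metrics, thresholds, or cascade order used by MESAS.

```python
import numpy as np

def flatten_update(update):
    """Flatten a client's model update (dict of layer name -> array) into one vector."""
    return np.concatenate([np.ravel(v) for v in update.values()])

def compute_metrics(update, global_model):
    """Compute several detection metrics for one client update.

    These three metrics are illustrative stand-ins, not MESAS's actual metric set.
    """
    u = flatten_update(update)
    g = flatten_update(global_model)
    delta = u - g
    return {
        "l2_distance": float(np.linalg.norm(delta)),
        "cosine_to_global": float(np.dot(u, g) /
                                  (np.linalg.norm(u) * np.linalg.norm(g) + 1e-12)),
        "update_variance": float(np.var(delta)),
    }

def cascade_filter(updates, global_model, z_threshold=2.5):
    """Filter client updates one metric at a time.

    At each stage, updates whose metric value deviates too far from the median
    of the remaining population (in median-absolute-deviation terms) are dropped.
    The threshold value is an assumed example, not a value from the paper.
    Returns the indices of updates kept for aggregation.
    """
    survivors = list(range(len(updates)))
    metrics = [compute_metrics(u, global_model) for u in updates]
    for name in metrics[0]:
        values = np.array([metrics[i][name] for i in survivors])
        median = np.median(values)
        mad = np.median(np.abs(values - median)) + 1e-12
        survivors = [i for i, v in zip(survivors, values)
                     if abs(v - median) / mad <= z_threshold]
    return survivors
```

Applying several metrics in sequence rather than a single score is what forces an adaptive attacker into the multi-objective optimization problem described above: a poisoned update must look unremarkable under every metric at once to survive the cascade.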

