Backdoor Federated Learning by Poisoning Backdoor-Critical Layers

Haomin Zhuang, et al.
Queen's University Belfast · University of Louisiana at Lafayette · Stony Brook University · Louisiana State University

Federated learning (FL) has been widely deployed to enable machine learning training on sensitive data across distributed devices. However, the decentralized learning paradigm and heterogeneity of FL further extend the attack surface for backdoor attacks. Existing FL attack and defense methodologies typically focus on the whole model. None of them recognizes the existence of backdoor-critical (BC) layers, a small subset of layers that dominate the model's vulnerabilities. Attacking the BC layers achieves effects equivalent to attacking the whole model, but at a far smaller chance of being detected by state-of-the-art (SOTA) defenses. This paper proposes a general in-situ approach that identifies and verifies BC layers from the perspective of attackers. Based on the identified BC layers, we carefully craft a new backdoor attack methodology that adaptively seeks a fundamental balance between attack effectiveness and stealthiness under various defense strategies. Extensive experiments show that our BC layer-aware backdoor attacks can successfully backdoor FL under seven SOTA defenses with only 10% malicious clients and outperform the latest backdoor attack methods.
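The abstract's core idea is that only a few layers dominate a model's backdoor vulnerability, so swapping just those layers from a poisoned model into a benign one should recover most of the backdoor effect. A minimal sketch of that intuition is below; the per-parameter swap-and-score rule, function names, and the toy "backdoor success rate" metric are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of backdoor-critical (BC) layer identification.
# Assumption: a layer is "backdoor-critical" if copying it alone from a
# poisoned model into a benign model sharply raises the fraction of
# triggered inputs that flip to the attacker's target label.

import copy

import torch
import torch.nn as nn


def backdoor_rate(model, trigger_inputs, target_label):
    """Fraction of triggered inputs classified as the attacker's target."""
    with torch.no_grad():
        preds = model(trigger_inputs).argmax(dim=1)
    return (preds == target_label).float().mean().item()


def layer_criticality(benign, poisoned, trigger_inputs, target_label):
    """Score each parameter tensor by how much copying it alone from the
    poisoned model into the benign model raises the backdoor success rate."""
    base = backdoor_rate(benign, trigger_inputs, target_label)
    scores = {}
    for name, _ in benign.named_parameters():
        hybrid = copy.deepcopy(benign)
        with torch.no_grad():
            dict(hybrid.named_parameters())[name].copy_(
                dict(poisoned.named_parameters())[name])
        scores[name] = backdoor_rate(hybrid, trigger_inputs, target_label) - base
    return scores
```

Under this framing, a stealthy attacker would submit malicious updates only for the top-scoring tensors and keep the remaining layers close to the benign global model, which is what lets the attack evade whole-model anomaly detectors.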




