Federated Learning Attacks and Defenses: A Survey

by Yao Chen, et al.

In artificial intelligence, the traditional centralized approach of training machine learning models on a single server suffers from several security and privacy deficiencies. Federated learning (FL) has been proposed to address these limitations and is known for breaking down "data silos" and protecting user privacy. However, FL has not yet gained wide adoption in industry, mainly because of concerns over its security, its privacy, and its high communication cost. To advance research in this field, build robust FL systems, and enable the wide application of FL, this paper systematically surveys the possible attacks on current FL systems and the corresponding defenses. First, it briefly introduces the basic FL workflow and background knowledge on attacks and defenses, and reviews the extensive recent research on privacy theft and malicious attacks. Most importantly, in view of the three existing classification criteria (the three stages of machine learning, the three roles in federated learning, and the CIA properties of privacy protection: Confidentiality, Integrity, and Availability), we divide attack approaches into two categories according to the training stage and the prediction stage of machine learning. For each attack method, we further identify the CIA property it violates and the role an attacker can assume. Various defense mechanisms are then analyzed separately at the privacy level and the security level. Finally, we summarize the challenges facing the application of FL from the perspective of attacks and defenses, and we discuss future development directions, so that FL systems can be designed to resist different attacks and be more secure and stable.
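The basic FL workflow referenced in the abstract above is commonly instantiated as federated averaging (FedAvg): clients update a shared model on their private data, and a server aggregates the updates weighted by local dataset size. The sketch below is a minimal illustration under simplifying assumptions (a model represented as a plain weight vector, precomputed local gradients); the function names and numbers are illustrative, not from the survey.

```python
def local_update(weights, grad, lr=0.1):
    """One local gradient step on a client's private data (grad precomputed here)."""
    return [w - lr * g for w, g in zip(weights, grad)]

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# One round: global model [1.0, 1.0], two clients with different local gradients.
global_w = [1.0, 1.0]
updates = [
    local_update(global_w, [0.5, -0.5]),  # hypothetical client A
    local_update(global_w, [1.0, 1.0]),   # hypothetical client B
]
new_global = fedavg(updates, client_sizes=[100, 300])
```

Note that the raw data never leaves the clients, only model updates do; the attack surface the survey catalogs (gradient leakage, poisoning, backdoors) lives precisely in these exchanged updates.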


Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions

Federated learning (FL) is a machine learning (ML) approach that allows ...

SoK: On the Security & Privacy in Federated Learning

Advances in Machine Learning (ML) and its wide range of applications boo...

FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs

This paper introduces FedMLSecurity, a benchmark that simulates adversar...

Eavesdrop the Composition Proportion of Training Labels in Federated Learning

Federated learning (FL) has recently emerged as a new form of collaborat...

FedDef: Robust Federated Learning-based Network Intrusion Detection Systems Against Gradient Leakage

Deep learning methods have been widely applied to anomaly-based network ...

Advancements in Federated Learning: Models, Methods, and Privacy

Federated learning (FL) is a promising technique for addressing the risi...

Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning

Gradient inversion attack enables recovery of training samples from mode...
