Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups

by Aidmar Wainakh et al.

Federated learning (FL) enables a set of entities to collaboratively train a machine learning model without sharing their sensitive data, thus mitigating some privacy concerns. However, a growing number of works in the literature propose attacks that can manipulate the model or disclose information about the training data in FL. As a result, a belief has taken hold in the research community that FL is highly vulnerable to a variety of severe attacks. Although these attacks do indeed highlight security and privacy risks in FL, some of them may be far less effective in production deployments because they are feasible only under special, sometimes impractical, assumptions. Furthermore, some attacks are evaluated under limited setups that may not match real-world scenarios. In this paper, we investigate this issue by conducting a systematic mapping study of attacks against FL, covering 48 relevant papers from 2016 to the third quarter of 2021. On the basis of this study, we provide a quantitative analysis of the proposed attacks and their evaluation settings. This analysis reveals several research gaps with regard to the types of target ML models and their architectures. Additionally, we highlight unrealistic assumptions in the problem settings of some attacks, related to the hyperparameters of the ML model and the data distribution among clients. Furthermore, we identify and discuss several fallacies in the evaluation of attacks, which call into question the generalizability of the reported conclusions. As a remedy, we propose a set of recommendations to avoid these fallacies and to promote adequate evaluations.
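To make the FL training process the abstract refers to concrete, the following is a minimal sketch of federated averaging (FedAvg), the canonical aggregation scheme: each client performs local training on its private data and shares only model updates, which the server averages. The linear model, learning rate, and all names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

# Three clients, each holding private data drawn from the same ground-truth model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=50)
    clients.append((X, y))

# Server loop: broadcast the global model, collect local updates, average them.
# Raw client data never leaves the clients -- only the updated weights do.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)

print(global_w)  # converges toward true_w
```

The attacks surveyed in the paper target exactly these shared updates: a poisoning adversary submits manipulated weights to the averaging step, while an inference adversary inspects honest updates to reconstruct information about the private data.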


