DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks

08/14/2023
by Indu Joshi, et al.

Federated learning is a promising direction for tackling the privacy issues that arise from sharing patients' sensitive data. Federated systems in the medical image analysis domain often assume that the participating local clients are honest. Several studies report mechanisms by which a set of malicious clients can be introduced to poison the federated setup and degrade the performance of the global model. To counter this, robust aggregation methods have been proposed that defend against such attacks. We observe that most state-of-the-art robust aggregation methods depend heavily on the distance between the parameters or gradients of malicious clients and those of benign clients, which makes them vulnerable to local model poisoning attacks when the malicious and benign updates are close. Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that crafts malicious parameters or gradients whose distance to the benign clients' parameters or gradients is low, yet whose adverse effect on the global model's performance is high. Experiments on three publicly available medical image datasets demonstrate the efficacy of the proposed DISBELIEVE attack, which significantly lowers the performance of state-of-the-art robust aggregation methods for medical image analysis. Furthermore, compared to state-of-the-art local model poisoning attacks, the DISBELIEVE attack is also effective on natural images, where we observe a severe drop in the global model's multi-class classification performance on the benchmark CIFAR-10 dataset.
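The abstract only describes the attack at a high level: keep the malicious update within the distance range that distance-based aggregators consider benign, while choosing a direction that harms the global model. The sketch below is a rough illustration of that constraint-bounded idea, not the paper's actual algorithm; the distance budget, the opposite-of-the-mean objective, and the craft_malicious_update helper are all assumptions made for illustration.

```python
# Illustrative sketch only: the exact DISBELIEVE procedure is not given in the
# abstract, so the budget choice and the harmful direction below are assumptions.
import numpy as np

def craft_malicious_update(benign_updates: np.ndarray) -> np.ndarray:
    """Craft a harmful update that stays close to the benign clients' updates.

    benign_updates: array of shape (n_clients, n_params) holding the parameter
    (or gradient) vectors the attacker assumes the benign clients would send.
    """
    mean_update = benign_updates.mean(axis=0)
    # Hypothetical distance budget: stay within the largest benign-to-mean
    # distance, so a distance-based robust aggregator is unlikely to flag it.
    budget = np.max(np.linalg.norm(benign_updates - mean_update, axis=1))
    # Stand-in harmful objective: move opposite to the benign mean direction.
    direction = -mean_update / (np.linalg.norm(mean_update) + 1e-12)
    return mean_update + budget * direction

# Toy usage: 5 benign clients with 10-dimensional updates.
rng = np.random.default_rng(0)
benign = rng.normal(size=(5, 10))
malicious = craft_malicious_update(benign)
print(np.linalg.norm(malicious - benign.mean(axis=0)))  # stays within the budget
```

The point of the sketch is the two-part structure the abstract implies: an explicit closeness constraint derived from the benign updates themselves, combined with an objective that degrades the global model as much as that constraint allows.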


