Blind leads Blind: A Zero-Knowledge Attack on Federated Learning

by Jiyue Huang, et al.

Attacks on Federated Learning (FL) can severely reduce the quality of the generated models and limit the usefulness of this emerging learning paradigm, which enables on-premise decentralized learning. Various untargeted attacks on FL exist, but they are not widely applicable because they either i) assume that the attacker knows every update of the benign clients, which are in fact sent in encrypted form to the central server, or ii) assume that the attacker has a large dataset and sufficient resources to locally train updates imitating benign parties. In this paper, we design a zero-knowledge untargeted attack (ZKA), which synthesizes malicious data to craft adversarial models without eavesdropping on the transmissions of benign clients at all or requiring a large quantity of task-specific training data. ZKA has two variants for injecting malicious input into the FL system via synthetic data. ZKA-R generates adversarial ambiguous data by reverse engineering from the global models. For stealthiness, ZKA-G instead trains the local model on synthetic data from a generator that aims to synthesize images different from a randomly chosen class. Furthermore, we add a novel distance-based regularization term to both attacks to further enhance stealthiness. Experimental results on Fashion-MNIST and CIFAR-10 show that ZKA achieves a similar or even higher attack success rate (more than 50%) than state-of-the-art untargeted attacks against various defense mechanisms. As expected, ZKA-G is better at circumventing defenses, even showing a defense pass rate of close to 90%. Higher data heterogeneity favours ZKA-R, since detection becomes harder.
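The distance-based regularization mentioned in the abstract could, for instance, penalize how far the crafted malicious update drifts from the current global model, so the update looks statistically similar to benign ones. The sketch below is a minimal illustration of that idea only; the function name, the L2 form of the penalty, and the weight `lam` are assumptions, not details from the paper.

```python
import numpy as np

def stealth_regularized_loss(task_loss, w_local, w_global, lam=0.1):
    """Hypothetical distance-based regularizer (a sketch, not the
    paper's exact formulation): add a penalty proportional to the
    squared L2 distance between the attacker's local parameters and
    the global model, keeping the malicious update close to the
    global model and thus harder for distance-based defenses to flag."""
    distance_sq = float(np.linalg.norm(w_local - w_global) ** 2)
    return task_loss + lam * distance_sq

# Illustration: an update identical to the global model incurs no
# penalty; a distant update pays a cost proportional to its drift.
same = stealth_regularized_loss(1.0, np.zeros(4), np.zeros(4))      # → 1.0
drift = stealth_regularized_loss(1.0, np.ones(4), np.zeros(4))      # → 1.4
```

A defense such as Krum or a norm-clipping rule scores updates by their distance to the others, which is why shrinking this distance can raise the defense pass rate.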


FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning

Federated Learning (FL) is a distributed learning paradigm that enables ...

Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis

Deep Learning-based image synthesis techniques have been applied in heal...

Learning to Backdoor Federated Learning

In a federated learning (FL) system, malicious participants can easily e...

FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection

Federated learning (FL) enables multiple clients to train a model withou...

Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

Federated learning (FL), as a type of distributed machine learning frame...

Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment

Due to the distributed nature of Federated Learning (FL), researchers ha...

Towards a Defense against Backdoor Attacks in Continual Federated Learning

Backdoor attacks are a major concern in federated learning (FL) pipeline...
