Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis

03/22/2022
by Yuwei Sun, et al.

Model poisoning attacks on federated learning (FL) compromise an edge model to intrude upon the entire system, causing the machine learning model to malfunction. Such compromised models are tampered with to perform adversary-desired behaviors. In particular, we consider a semi-targeted setting in which the source class is predetermined but the target class is not; the goal is to make the global classifier misclassify data of the source class. Although approaches such as label flipping have been adopted to inject poisoned parameters into FL, their performance has been shown to be class-sensitive, varying with the chosen target class, and an attack typically becomes less effective when shifted to a different target class. To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA), which enhances a poisoning attack by finding the optimal target class in the feature space. Moreover, we study a more challenging situation in which the adversary has limited prior knowledge of the clients' data. To tackle this problem, ADA deduces pairwise distances between classes in the latent feature space from the shared model parameters based on backward error analysis. We performed extensive empirical evaluations of ADA by varying the attacking frequency in three image classification tasks. ADA increased the attack performance by 1.8 times in the most challenging case, with an attacking frequency of 0.01.
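The core idea described above—choosing the target class nearest to the predetermined source class in the latent feature space and then flipping labels accordingly—can be sketched as follows. This is a minimal illustration that assumes the adversary can extract penultimate-layer features from the shared global model for a small auxiliary dataset; the paper's backward-error-analysis variant, which estimates these distances from the shared model parameters alone, is not reproduced here. The names `select_target_class`, `flip_labels`, and `penultimate_layer` are hypothetical.

```python
# Simplified sketch of ADA-style target-class selection and label flipping.
# Assumption: the adversary holds a small auxiliary dataset and can query the
# shared global model for latent (penultimate-layer) features. This is NOT the
# paper's backward-error-analysis procedure, only an illustrative stand-in.
import numpy as np

def select_target_class(features, labels, source_class):
    """Pick the class closest to `source_class` in the latent feature space.

    features: (N, D) latent representations from the shared global model
    labels:   (N,) ground-truth class indices
    source_class: the predetermined class the attack should misclassify
    """
    classes = np.unique(labels)
    # Per-class centroids in the latent feature space.
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    src = centroids[source_class]
    # "Attacking distance": Euclidean distance between class centroids.
    distances = {
        c: np.linalg.norm(centroids[c] - src)
        for c in classes if c != source_class
    }
    # The nearest class is taken as the semi-targeted attack's target.
    return min(distances, key=distances.get)

def flip_labels(labels, source_class, target_class):
    """Label-flipping poisoning: relabel source-class samples as the target."""
    poisoned = labels.copy()
    poisoned[labels == source_class] = target_class
    return poisoned

# Hypothetical usage on a compromised client before local training:
#   feats = penultimate_layer(global_model, x_local)   # assumed helper
#   target = select_target_class(feats, y_local, source_class=3)
#   y_poisoned = flip_labels(y_local, source_class=3, target_class=target)
```

The design choice here is that a smaller feature-space distance between the source and target class is assumed to make the induced misclassification easier to embed into the global model, which is the intuition behind selecting the nearest class rather than a fixed one.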

