BayBFed: Bayesian Backdoor Defense for Federated Learning

01/23/2023
by Kavita Kumari, et al.

Federated learning (FL) allows participants to jointly train a machine learning model without sharing their private data with others. However, FL is vulnerable to poisoning attacks such as backdoor attacks. Consequently, a variety of defenses have recently been proposed, which primarily use intermediate states of the global model (i.e., logits) or the distance of the local models from the global model (i.e., the L2-norm) to detect backdoors. However, because these approaches operate directly on client updates, their effectiveness depends on factors such as the clients' data distribution or the adversary's attack strategy. In this paper, we introduce a novel and more generic backdoor defense framework, called BayBFed, which uses probability distributions over client updates to detect malicious updates in FL: it computes a probabilistic measure over the clients' updates to keep track of any adjustments made to them, and it uses a novel detection algorithm that leverages this probabilistic measure to efficiently detect and filter out malicious updates. It thereby overcomes the shortcomings of previous approaches that stem from operating directly on client updates, since the probabilistic measure captures all aspects of the local clients' training strategies. BayBFed builds on two Bayesian non-parametric extensions: (i) a Hierarchical Beta-Bernoulli process to draw a probabilistic measure given the clients' updates, and (ii) an adaptation of the Chinese Restaurant Process (CRP), which we refer to as CRP-Jensen, that leverages this probabilistic measure to detect and filter out malicious updates. We extensively evaluate our defense on five benchmark datasets: CIFAR10, Reddit, IoT intrusion detection, MNIST, and FMNIST, and show that it effectively detects and eliminates malicious updates in FL without deteriorating the benign performance of the global model.
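The abstract names a two-stage pipeline: a probabilistic measure drawn from each client's update, followed by a CRP-style grouping step that filters outliers. The minimal sketch below illustrates that two-stage idea only; it is not the paper's implementation. The histogram construction, the Beta(a, b) smoothing, the use of the Jensen-Shannon distance, and the threshold value are all illustrative assumptions.

```python
# Minimal sketch of a BayBFed-style filtering round (illustrative only).
# Stage 1: a Beta-Bernoulli-style probabilistic measure per client update.
# Stage 2: a CRP-flavored grouping that flags outlying updates via the
# Jensen-Shannon distance. Names, thresholds, and binning are assumptions.

import numpy as np
from scipy.spatial.distance import jensenshannon

def beta_bernoulli_measure(update, n_bins=32, a=1.0):
    """Map a flat parameter update to a probability vector.

    Each histogram bin count is smoothed with a Beta/Dirichlet-style
    prior (posterior mean), giving a measure over the update's value
    range. This is an assumption, not the paper's exact hierarchical
    construction.
    """
    counts, _ = np.histogram(update, bins=n_bins, range=(-1.0, 1.0))
    return (counts + a) / (counts.sum() + n_bins * a)

def crp_jensen_filter(measures, threshold=0.15):
    """Seat each client at the existing "table" (cluster) whose mean
    measure is closest in Jensen-Shannon distance, or open a new table
    if all tables are farther than `threshold`. Clients outside the
    largest table are flagged as suspect.
    """
    tables = []       # list of lists of client indices
    table_means = []  # running mean measure per table
    for i, m in enumerate(measures):
        dists = [jensenshannon(m, mu) for mu in table_means]
        if dists and min(dists) < threshold:
            k = int(np.argmin(dists))
            tables[k].append(i)
            n = len(tables[k])
            table_means[k] = table_means[k] * (n - 1) / n + m / n
        else:
            tables.append([i])
            table_means.append(m.copy())
    majority = max(tables, key=len)
    return set(range(len(measures))) - set(majority)

# Toy usage: 8 benign clients plus 2 whose updates are shifted,
# mimicking a backdoor-style deviation in the update distribution.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, 1000) for _ in range(8)]
updates += [rng.normal(0.6, 0.1, 1000) for _ in range(2)]
measures = [beta_bernoulli_measure(u) for u in updates]
print("flagged clients:", crp_jensen_filter(measures))  # -> {8, 9}
```

In this toy run the two shifted clients open their own "table" and are flagged, while the benign majority clusters together; the key design point the sketch mirrors is that filtering operates on probability distributions over updates rather than on the raw update vectors themselves.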



Related research:

07/19/2022
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
Federated learning (FL) is vulnerable to model poisoning attacks, in whi...

01/06/2021
FLGUARD: Secure and Private Federated Learning
Recently, a number of backdoor attacks against Federated Learning (FL) h...

01/08/2022
LoMar: A Local Defense Against Poisoning Attack on Federated Learning
Federated learning (FL) provides a highly efficient decentralized machine ...

03/31/2023
Secure Federated Learning against Model Poisoning Attacks via Client Filtering
Given the distributed nature, detecting and defending against the backdo...

09/11/2021
On the Initial Behavior Monitoring Issues in Federated Learning
In Federated Learning (FL), a group of workers participate to build a gl...

07/15/2022
Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
Collaboration among multiple data-owning entities (e.g., hospitals) can ...

07/25/2022
Technical Report: Assisting Backdoor Federated Learning with Whole Population Knowledge Alignment
Due to the distributed nature of Federated Learning (FL), researchers ha...
