Reconstructive Neuron Pruning for Backdoor Defense

by Yige Li et al.

Deep neural networks (DNNs) have been found to be vulnerable to backdoor attacks, raising security concerns about their deployment in mission-critical applications. While existing defense methods have demonstrated promising results, it is still not clear how to effectively remove backdoor-associated neurons from backdoored DNNs. In this paper, we propose a novel defense called Reconstructive Neuron Pruning (RNP) to expose and prune backdoor neurons via an unlearning and then recovering process. Specifically, RNP first unlearns the neurons by maximizing the model's error on a small subset of clean samples and then recovers the neurons by minimizing the model's error on the same data. In RNP, unlearning operates at the neuron level while recovering operates at the filter level, forming an asymmetric reconstructive learning procedure. We show that such an asymmetric process on only a few clean samples can effectively expose and prune the backdoor neurons implanted by a wide range of attacks, achieving a new state-of-the-art defense performance. Moreover, the unlearned model at the intermediate step of our RNP can be directly used to improve other backdoor defense tasks, including backdoor removal, trigger recovery, backdoor label detection, and backdoor sample detection. Code is available at <>.
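The asymmetric unlearn-then-recover procedure described in the abstract can be sketched on a toy model. The sketch below is a minimal illustration under stated assumptions, not the paper's released implementation: a tiny linear network stands in for the DNN, a per-neuron scale vector `s` stands in for the neuron-level parameters that are unlearned (e.g. batch-norm scales), and a mask `m` stands in for the filter-level pruning mask learned during recovery. All hyperparameters and thresholds here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n clean samples, d_in inputs, k hidden neurons/filters.
n, d_in, k = 64, 8, 6
X = rng.normal(size=(n, d_in))
W = rng.normal(size=(k, d_in)) / np.sqrt(d_in)   # fixed hidden weights
v = rng.normal(size=k) / np.sqrt(k)              # fixed output weights
t = X @ rng.normal(size=d_in) / np.sqrt(d_in)    # clean-data targets

h = X @ W.T                                      # hidden activations, shape (n, k)

def loss(s, m):
    # Model output is (h * s * m) @ v; loss is MSE on the clean subset.
    return np.mean(((h * s * m) @ v - t) ** 2)

s = np.ones(k)   # neuron-level parameters (unlearned in step 1)
m = np.ones(k)   # filter-level pruning mask (learned in step 2)

loss_clean = loss(s, m)

# Step 1 -- unlearning: gradient *ascent* on s to maximize the model's
# error on the small clean subset (neuron level).
for _ in range(30):
    err = (h * s * m) @ v - t
    g_s = 2.0 / n * (err[:, None] * h * v * m).sum(axis=0)
    s = s + 0.1 * g_s
loss_unlearned = loss(s, m)

# Step 2 -- recovering: projected gradient *descent* on the mask m only,
# minimizing the same clean-data error while the unlearned s stays fixed
# (filter level).  The loss is quadratic in m, so a 1/L step size with L
# the largest Hessian eigenvalue guarantees monotone decrease.
B = h * (s * v)                                   # output = B @ m
L = np.linalg.eigvalsh(2.0 / n * B.T @ B).max()
for _ in range(300):
    g_m = 2.0 / n * B.T @ (B @ m - t)
    m = np.clip(m - g_m / L, 0.0, 1.0)
loss_recovered = loss(s, m)

# Filters whose mask stays low failed to recover on clean data; RNP treats
# such filters as backdoor-associated and prunes them (0.5 is illustrative).
pruned = np.where(m < 0.5)[0]
print(f"clean {loss_clean:.3f} | unlearned {loss_unlearned:.3f} "
      f"| recovered {loss_recovered:.3f}")
print("pruned filter indices:", pruned)
```

The asymmetry is the key point: unlearning perturbs per-neuron parameters, while recovery is only allowed to reweight whole filters, so filters that the unlearning step "hijacked" cannot be restored on clean data and are exposed by a low mask value.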




