DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation

06/13/2023
by   Zhicong Yan, et al.

Backdoor attacks have emerged as an urgent threat to Deep Neural Networks (DNNs): victim DNNs are furtively implanted with malicious neurons that can be activated by an adversary-chosen trigger. To defend against such attacks, many works adopt a staged pipeline for removing backdoors from victim DNNs: inspecting, locating, and erasing. However, when only a small amount of clean data is accessible, this pipeline is fragile and cannot erase backdoors completely without sacrificing model accuracy. To address this issue, we propose a novel data-free holistic backdoor erasing (DHBE) framework. Instead of the staged pipeline, DHBE treats backdoor erasing as a unified adversarial procedure that seeks an equilibrium between two competing processes: distillation and backdoor regularization. In distillation, the backdoored DNN is distilled into a proxy model, transferring its knowledge of clean data; however, backdoors may be transferred along with it. In backdoor regularization, the proxy model is holistically regularized to prevent it from inheriting any backdoor transferred during distillation. The two processes proceed jointly via data-free adversarial optimization until a clean, high-accuracy proxy model is obtained. With this novel adversarial design, our framework demonstrates its superiority in three aspects: 1) minimal detriment to model accuracy, 2) high tolerance for hyperparameters, and 3) no demand for clean data. Extensive experiments on various backdoor attacks and datasets verify the effectiveness of the proposed framework. Code is available at <https://github.com/yanzhicong/DHBE>
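To make the adversarial procedure concrete, below is a minimal PyTorch sketch of one training round, written under common data-free adversarial distillation assumptions; it is an illustration, not the authors' released implementation (see the repository linked above for that). The generator architecture, the loss choices (L1 on logits for distillation, a KL-based invariance term for regularization), the use of a single learned additive trigger as a stand-in for "any possible backdoor", and all hyperparameters are assumptions.

```python
# Illustrative sketch of DHBE's two competing processes, NOT the released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM = 100  # latent size (assumed)

class Generator(nn.Module):
    """Synthesizes surrogate inputs from noise (architecture is illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM, 128 * 8 * 8), nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh())

    def forward(self, z):
        return self.net(z)

def dhbe_step(teacher, proxy, gen, trigger, opt_p, opt_g, opt_t, lam=0.1):
    """One round of the distillation / backdoor-regularization game (simplified)."""
    z = torch.randn(64, Z_DIM)

    # (1) Generator step: synthesize inputs on which the proxy still
    # disagrees with the backdoored teacher (adversarial distillation).
    x = gen(z)
    with torch.no_grad():
        t_out = teacher(x)
    loss_g = -F.l1_loss(proxy(x), t_out)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # (2) Trigger step: search for a small additive pattern that shifts the
    # proxy's predictions, i.e. a candidate transferred backdoor.
    x = gen(z).detach()
    x_trig = torch.clamp(x + trigger, -1.0, 1.0)
    shift = F.kl_div(F.log_softmax(proxy(x_trig), dim=1),
                     F.softmax(proxy(x), dim=1).detach(),
                     reduction='batchmean')
    loss_t = -shift + 0.01 * trigger.abs().mean()  # keep the trigger small
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

    # (3) Proxy step: match the teacher on synthesized data (distillation)
    # while staying invariant to the found trigger (backdoor regularization).
    x = gen(z).detach()
    x_trig = torch.clamp(x + trigger.detach(), -1.0, 1.0)
    with torch.no_grad():
        t_out = teacher(x)
    p_out = proxy(x)
    loss_kd = F.l1_loss(p_out, t_out)
    loss_reg = F.kl_div(F.log_softmax(proxy(x_trig), dim=1),
                        F.softmax(p_out, dim=1).detach(),
                        reduction='batchmean')
    loss_p = loss_kd + lam * loss_reg
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
    return loss_kd.item(), loss_reg.item()

# Illustrative wiring (initialization choices are assumptions):
# proxy = make_student()  # a freshly initialized network (hypothetical helper)
# gen = Generator()
# trigger = nn.Parameter(0.01 * torch.randn(1, 3, 32, 32))
# opt_p = torch.optim.SGD(proxy.parameters(), lr=0.1, momentum=0.9)
# opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
# opt_t = torch.optim.Adam([trigger], lr=1e-2)
```

Under this reading, the equilibrium described in the abstract is reached when the generator can no longer find inputs where proxy and teacher disagree, and the trigger search can no longer find a small pattern that shifts the proxy's predictions: the proxy then reproduces the teacher's clean behavior without its backdoor.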

