Model-Contrastive Learning for Backdoor Defense

05/09/2022
by   Zhihao Yue, et al.

With the growing popularity of Artificial Intelligence (AI) techniques, an increasing number of backdoor injection attacks are designed to maliciously threaten Deep Neural Networks (DNNs) deployed in safety-critical systems. Although various defense methods can effectively erase backdoor triggers from DNNs, they still suffer from a non-negligible Attack Success Rate (ASR) as well as a notable loss in benign accuracy. Inspired by the observation that a backdoored DNN forms new clusters in its feature space for poisoned data, in this paper we propose a novel backdoor defense method named MCL based on model-contrastive learning. Specifically, our model-contrastive backdoor defense consists of two steps. First, we use a backdoor trigger synthesis technique to invert the trigger. Next, the inverted trigger is used to construct poisoned data, so that model-contrastive learning can pull the feature representations of poisoned data close to those of benign data while pushing them away from the original poisoned feature representations. Extensive experiments against five state-of-the-art attack methods on multiple benchmark datasets show that, using only a small fraction (5%) of clean data, MCL effectively reduces backdoor threats while maintaining high accuracy on benign data, degrading the benign accuracy by less than 1%.
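The contrastive step described above can be sketched as an InfoNCE-style loss: for a poisoned input, the feature under the fine-tuned model is pulled toward the benign feature (positive pair) and pushed away from the feature the original backdoored model produced (negative pair). The function and parameter names below (`model_contrastive_loss`, the temperature `tau`) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def model_contrastive_loss(z, z_pos, z_neg, tau=0.5):
    """InfoNCE-style contrastive loss (illustrative sketch).

    z     : feature of a poisoned input under the model being repaired
    z_pos : feature of the corresponding benign input (positive)
    z_neg : feature of the poisoned input under the backdoored model (negative)
    tau   : temperature hyperparameter (assumed value)
    """
    pos = np.exp(cosine_sim(z, z_pos) / tau)
    neg = np.exp(cosine_sim(z, z_neg) / tau)
    return -np.log(pos / (pos + neg))

# Toy usage: the loss is small when z aligns with the benign feature,
# and large when z still aligns with the backdoored-model feature.
benign = np.array([1.0, 0.0])
poisoned = np.array([0.0, 1.0])
aligned_loss = model_contrastive_loss(benign, benign, poisoned)
misaligned_loss = model_contrastive_loss(poisoned, benign, poisoned)
```

Minimizing this loss over the small set of clean data (with synthesized triggers applied) is what drives the poisoned-data clusters in feature space back toward the benign clusters.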
