Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection
A security threat to deep neural networks (DNNs) is backdoor contamination, in which an adversary poisons the training data of a target model to inject a Trojan so that images carrying a specific trigger will always be classified into a specific label. Prior research on this problem assumes the dominance of the trigger in an image's representation, which causes any image with the trigger to be recognized as a member of the target class. Such a trigger also exhibits unique features in the representation space and can therefore be easily separated from legitimate images. Our research, however, shows that a simple targeted contamination attack can make the representation of an attack image far less distinguishable from that of legitimate ones, thereby evading existing defenses against backdoor infection. We show that such a contamination attack actually subtly changes the representation distribution of the target class, which can be captured by a statistical analysis. More specifically, we leverage an EM algorithm to decompose an image's representation into an identity part (e.g., person, traffic sign) and a within-class variation part (e.g., lighting, poses). We then analyze the distribution in each class, identifying those more likely to be characterized by a mixture model resulting from adding attack samples to the legitimate image pool. Our research shows that this new technique effectively detects data contamination attacks, including the new one we propose, and is also robust against evasion attempts by a knowledgeable adversary.
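To make the per-class analysis concrete, the sketch below illustrates the general idea of the mixture test under simplifying assumptions: it treats each class's penultimate-layer representations as either a single Gaussian (clean) or a two-component Gaussian mixture (contaminated), and flags classes whose likelihood-ratio statistic is an outlier. It is not the paper's SCAn implementation (which uses an EM-based identity/variation decomposition); the function names, the use of scikit-learn's GaussianMixture, and the MAD-based outlier threshold are illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): compare how well each
# class's representations are explained by one Gaussian versus a two-component
# mixture. A contaminated class, mixing legitimate images with trigger-carrying
# ones, tends to favor the mixture and yields a large likelihood-ratio score.
import numpy as np
from sklearn.mixture import GaussianMixture


def class_mixture_score(reps: np.ndarray) -> float:
    """reps: (n_samples, feature_dim) representations of a single class."""
    n = reps.shape[0]
    single = GaussianMixture(n_components=1, covariance_type="full",
                             random_state=0).fit(reps)
    mixture = GaussianMixture(n_components=2, covariance_type="full",
                              random_state=0).fit(reps)
    # score() returns the mean per-sample log-likelihood; scale to totals.
    return 2.0 * n * (mixture.score(reps) - single.score(reps))


def flag_suspicious_classes(reps_by_class: dict, z_thresh: float = 3.0) -> list:
    """Flag classes whose mixture score is an outlier across all classes."""
    scores = {c: class_mixture_score(r) for c, r in reps_by_class.items()}
    vals = np.array(list(scores.values()))
    med = np.median(vals)
    mad = np.median(np.abs(vals - med)) + 1e-12  # robust spread estimate
    return [c for c, s in scores.items()
            if (s - med) / (1.4826 * mad) > z_thresh]
```

In practice, `reps_by_class` would hold features extracted from the model's penultimate layer for the training images of each class; the robust (median/MAD) threshold is one simple way to decide which per-class statistics are anomalously large.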