FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces

09/13/2019
by Run Wang, et al.

In recent years, we have witnessed the unprecedented success of generative adversarial networks (GANs) and their variants in image synthesis. These techniques are widely adopted for synthesizing fake faces, which poses a serious challenge to existing face recognition (FR) systems and brings potential security threats to social networks and media as the fakes spread and fuel misinformation. Unfortunately, robust detectors of these AI-synthesized fake faces are still in their infancy and are not ready to fully tackle this emerging challenge. Currently, image-forensics-based and learning-based approaches are the two major categories of strategies for detecting fake faces. In this work, we propose an alternative category of approaches based on monitoring neuron behavior. Studies on neuron coverage and interactions have shown that they can serve as testing criteria for deep learning systems, especially under exposure to adversarial attacks. Here, we conjecture that monitoring neuron behavior can also serve as an asset in detecting fake faces, since layer-by-layer neuron activation patterns may capture subtle features that are important to the fake detector. Empirically, we show that the proposed FakeSpotter, based on neuron coverage behavior in tandem with a simple linear classifier, greatly outperforms deeply trained convolutional neural networks (CNNs) at spotting AI-synthesized fake faces. Extensive experiments on three deep learning (DL) based FR systems, two GAN variants for synthesizing fake faces, and two public high-resolution face datasets demonstrate the potential of FakeSpotter to serve as a simple yet robust baseline for fake face detection in the wild.
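To make the core idea concrete, the sketch below illustrates one plausible reading of the abstract: capture layer-wise neuron activation statistics from a face model with forward hooks, summarize each layer by a simple coverage value, and train a linear classifier on those feature vectors. This is not the authors' implementation; the backbone (torchvision's resnet18), the activation threshold, and the random stand-in data are all assumptions made for illustration.

```python
# Illustrative sketch of neuron-coverage features + a linear classifier.
# Assumptions (not from the paper): resnet18 backbone, per-channel mean
# activations, threshold 0.0, and random tensors in place of face crops.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every convolutional layer to observe its neurons.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(make_hook(name))

def coverage_features(image, threshold=0.0):
    """One feature per hooked layer: the fraction of channels whose
    mean activation exceeds `threshold` (a crude coverage statistic)."""
    activations.clear()
    with torch.no_grad():
        model(image)
    feats = []
    for name in sorted(activations):
        act = activations[name]                  # shape (N, C, H, W)
        channel_means = act.mean(dim=(0, 2, 3))  # mean activation per channel
        feats.append((channel_means > threshold).float().mean().item())
    return feats

# Toy usage: random tensors stand in for real and fake face images.
real = [coverage_features(torch.randn(1, 3, 224, 224)) for _ in range(8)]
fake = [coverage_features(torch.randn(1, 3, 224, 224) * 1.5) for _ in range(8)]

clf = LogisticRegression(max_iter=1000)
clf.fit(real + fake, [0] * len(real) + [1] * len(fake))
print(clf.predict([coverage_features(torch.randn(1, 3, 224, 224))]))
```

The design choice this mirrors is the abstract's claim that layer-by-layer activation patterns carry discriminative signal, so a deliberately simple linear model on top can suffice; any per-layer activation summary could be substituted for the coverage fraction used here.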
