FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces

09/13/2019
by Run Wang, et al.

In recent years, we have witnessed the unprecedented success of generative adversarial networks (GANs) and their variants in image synthesis. These techniques are widely adopted in synthesizing fake faces, which poses a serious challenge to existing face recognition (FR) systems and brings potential security threats to social networks and media as the fakes spread and fuel misinformation. Unfortunately, robust detectors of these AI-synthesized fake faces are still in their infancy and are not ready to fully tackle this emerging challenge. Currently, image-forensics-based and learning-based approaches are the two major categories of strategies for detecting fake faces. In this work, we propose an alternative category of approaches based on monitoring neuron behavior. Studies on neuron coverage and interactions have shown that they can serve as testing criteria for deep learning systems, especially under exposure to adversarial attacks. Here, we conjecture that monitoring neuron behavior can also serve as an asset in detecting fake faces, since layer-by-layer neuron activation patterns may capture subtle features that are important for a fake detector. Empirically, we show that the proposed FakeSpotter, based on neuron coverage behavior in tandem with a simple linear classifier, greatly outperforms deeply trained convolutional neural networks (CNNs) at spotting AI-synthesized fake faces. Extensive experiments carried out on three deep learning (DL) based FR systems, with two GAN variants for synthesizing fake faces, on two public high-resolution face datasets demonstrate the potential of FakeSpotter as a simple yet robust baseline for fake face detection in the wild.
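The abstract gives no implementation details, but the core idea, turning layer-by-layer neuron activation patterns into features for a simple linear classifier, can be sketched as below. Everything in the sketch (the ReLU-based coverage criterion, the zero activation threshold, the ResNet-50 stand-in for an FR backbone, the random placeholder images, and the logistic-regression classifier) is an illustrative assumption rather than the authors' implementation.

```python
# Illustrative sketch only: the coverage criterion, backbone, and classifier
# below are assumptions, not the paper's actual method or code.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

def neuron_coverage_features(model, images, threshold=0.0):
    """Return a [batch, n_layers] matrix of per-layer activated-neuron fractions."""
    acts, hooks = [], []

    def hook(_module, _inputs, output):
        acts.append(output.detach())

    # Record the output of every ReLU so we can measure which neurons "fire".
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(hook))

    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()

    feats = []
    for a in acts:
        flat = a.flatten(start_dim=1)                 # [batch, neurons_in_layer]
        feats.append((flat > threshold).float().mean(dim=1))
    return torch.stack(feats, dim=1).cpu().numpy()    # [batch, n_layers]

# Placeholder random tensors stand in for aligned real/fake face crops.
real_images = torch.randn(8, 3, 224, 224)
fake_images = torch.randn(8, 3, 224, 224)

# A generic ResNet-50 stands in for a face-recognition backbone here.
backbone = models.resnet50(weights=None).eval()
X = neuron_coverage_features(backbone, torch.cat([real_images, fake_images]))
y = [0] * len(real_images) + [1] * len(fake_images)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

The design intuition is that each image is summarized by how strongly it activates each layer of a pretrained network, so the downstream classifier can stay simple (here a logistic regression, as one possible "simple linear classifier").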


Related research

05/28/2020
DeepSonar: Towards Effective and Robust Detection of AI-Synthesized Fake Voices
With the recent advances in voice synthesis, AI-synthesized fake voices ...

06/21/2019
Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations
Recent years have seen fast development in synthesizing realistic human ...

02/01/2020
Global Texture Enhancement for Fake Face Detection in the Wild
Generative Adversarial Networks (GANs) can generate realistic fake face ...

06/12/2020
Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces
Deepfake represents a category of face-swapping attacks that leverage ma...

01/28/2022
Detection of fake faces in videos
Deep learning methodologies have been used to create applications that...

01/24/2021
Fighting deepfakes by detecting GAN DCT anomalies
Synthetic multimedia content created through AI technologies, such as Ge...

12/31/2019
Automated Testing for Deep Learning Systems with Differential Behavior Criteria
In this work, we conducted a study on building an automated testing syst...
