Testing Human Ability To Detect Deepfake Images of Human Faces

12/07/2022
by Sergi D. Bray, et al.

Deepfakes are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and they pose a threat to many areas of systems and societies, making them a topic of interest to various aspects of cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to distinguish deepfake images of human faces (StyleGAN2:FFHQ) from non-deepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, per-image accuracy ranged quite evenly between 85% and below 50%, suggesting that there is a need for an urgent call to action to address this threat.
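The allocation and image-sampling procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual survey code: it assumes simple uniform randomisation, and the group labels, image identifiers, and function names are hypothetical; only the group count (four) and pool sizes (50 deepfake, 50 real, 20 shown per participant) come from the abstract.

```python
import random

# Hypothetical group labels; the abstract specifies one control
# group and three assistance interventions.
GROUPS = ["control", "intervention_1", "intervention_2", "intervention_3"]


def allocate_participant(rng: random.Random) -> str:
    """Randomly assign a participant to one of the four groups."""
    return rng.choice(GROUPS)


def sample_trial_images(rng: random.Random) -> list[str]:
    """Draw 20 images from the pool of 50 deepfake and 50 real faces."""
    pool = [f"deepfake_{i}" for i in range(50)] + [f"real_{i}" for i in range(50)]
    return rng.sample(pool, 20)  # sampled without replacement


rng = random.Random(42)  # seeded for reproducibility of this sketch
group = allocate_participant(rng)
images = sample_trial_images(rng)
```

Sampling without replacement (`random.sample`) ensures no participant sees the same image twice within their 20-image sequence.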


