Artifact magnification on deepfake videos increases human detection and subjective confidence

04/10/2023
by   Emilie Josephs, et al.

The development of technologies for easily and automatically falsifying video has raised practical questions about people's ability to detect false information online. How vulnerable are people to deepfake videos? What technologies can be applied to boost their performance? Human susceptibility to deepfake videos is typically measured in laboratory settings, which do not reflect the challenges of real-world browsing. In typical browsing, deepfakes are rare, engagement with a video may be brief, viewers may be distracted, and streaming quality may be degraded. Here, we tested deepfake detection under these ecological viewing conditions, and found that detection was impaired in all cases. Principles from signal detection theory indicated that different viewing conditions affected different dimensions of detection performance. Overall, this suggests that the current literature underestimates people's susceptibility to deepfakes. Next, we examined how computer vision models might be integrated into users' decision process to increase accuracy and confidence during deepfake detection. We evaluated the effectiveness of communicating the model's prediction to the user by amplifying artifacts in fake videos. We found that artifact amplification was highly effective at making fake videos distinguishable from real ones, in a manner that was robust across viewing conditions. Additionally, compared to a traditional text-based prompt, artifact amplification was more convincing: people accepted the model's suggestion more often and reported higher final confidence in their model-supported decision, particularly for more challenging videos. Overall, this suggests that visual indicators that cause distortions on fake videos may be highly effective at mitigating the impact of falsified video.
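The signal detection framework mentioned above separates two dimensions of performance: sensitivity (d'), which captures how well real and fake videos are discriminated, and criterion (c), which captures response bias, i.e. a tendency to answer "real" or "fake" regardless of the stimulus. As a minimal sketch (not the authors' analysis code), the two measures can be computed from raw response counts like this; the log-linear correction used here is one common convention, assumed for illustration:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (c) from raw counts.

    "Hit" = fake video correctly called fake; "false alarm" = real
    video incorrectly called fake. A log-linear correction (add 0.5
    to each cell) keeps rates away from 0 and 1, where the z-score
    would be infinite.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)          # discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion
```

On this view, a viewing condition that degrades the percept (e.g. low streaming quality) should lower d', while one that changes expectations (e.g. deepfakes being rare) should shift c, which is why the paper can attribute different conditions to different dimensions of performance.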

Related research

07/18/2022 · Visual Representations of Physiological Signals for Fake Video Detection
Realistic fake videos are a potential tool for spreading harmful misinfo...

05/13/2021 · Comparing Human and Machine Deepfake Detection with Affective and Holistic Processing
The recent emergence of deepfake videos leads to an important societal q...

06/01/2022 · Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines
Deepfakes pose a serious threat to our digital society by fueling the sp...

06/20/2022 · Practical Deepfake Detection: Vulnerabilities in Global Contexts
Recent advances in deep learning have enabled realistic digital alterati...

12/07/2022 · Testing Human Ability To Detect Deepfake Images of Human Faces
Deepfakes are computationally-created entities that falsely represent re...

06/12/2023 · Deepfake in the Metaverse: An Outlook Survey
We envision deepfake technologies, which synthesize realistic fake image...

04/03/2020 · Sifter: A Hybrid Workflow for Theme-based Video Curation at Scale
User-generated content platforms curate their vast repositories into the...
