PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators

04/14/2023
by Nils Lukas, et al.

Deepfakes refer to content synthesized using deep generators, which, when misused, have the potential to erode trust in digital media. Synthesizing high-quality deepfakes requires access to large and complex generators that only a few entities can train and provide. The threat comes from malicious users who exploit access to the provided model and generate harmful deepfakes without risking detection. Watermarking makes deepfakes detectable by embedding an identifiable code into the generator that is later extractable from its generated images. We propose Pivotal Tuning Watermarking (PTW), a method for watermarking pre-trained generators (i) three orders of magnitude faster than watermarking from scratch and (ii) without the need for any training data. We improve existing watermarking methods and scale to generators 4× larger than related work. PTW can embed longer codes than existing methods while better preserving the generator's image quality. We propose rigorous, game-based definitions for robustness and undetectability, and our study reveals that watermarking is not robust against an adaptive white-box attacker who has control over the generator's parameters. We propose an adaptive attack that can successfully remove any watermark with access to only 200 non-watermarked images. Our work challenges the trustworthiness of watermarking for deepfake detection when the parameters of a generator are available.
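To make the embedding idea concrete, below is a minimal, hypothetical sketch of pivotal-tuning-style watermark embedding: a frozen copy of the pre-trained generator serves as the pivot that anchors image fidelity, while a second copy is fine-tuned until a watermark decoder recovers the target code from its outputs. The names (`generator`, `decoder`, `code_bits`), the loss choices, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of pivotal-tuning watermark embedding.
# Assumptions: `generator` maps latent codes z -> images, `decoder` maps
# images -> per-bit logits, and `code_bits` is a {0,1} tensor of the
# watermark message. None of these are the paper's concrete models.
import copy
import torch
import torch.nn.functional as F

def embed_watermark(generator, decoder, code_bits, steps=1000, lr=1e-4,
                    fidelity_weight=1.0, latent_dim=512, device="cpu"):
    """Fine-tune a copy of `generator` so `decoder` recovers `code_bits`
    from its images, while staying close to the frozen pivot generator."""
    pivot = copy.deepcopy(generator).eval()          # frozen reference ("pivot")
    for p in pivot.parameters():
        p.requires_grad_(False)

    watermarked = copy.deepcopy(generator).train()   # generator being watermarked
    opt = torch.optim.Adam(watermarked.parameters(), lr=lr)
    target = code_bits.float().to(device)            # e.g. a 40-bit message

    for _ in range(steps):
        z = torch.randn(8, latent_dim, device=device)   # batch of latent codes
        x_w = watermarked(z)                             # watermarked images
        with torch.no_grad():
            x_p = pivot(z)                               # pivot images, no grad

        # 1) watermark loss: decoder must predict the target bits
        logits = decoder(x_w)
        wm_loss = F.binary_cross_entropy_with_logits(
            logits, target.expand_as(logits))

        # 2) fidelity loss: stay close to the pivot's output
        #    (a perceptual metric would be the natural choice; plain L2
        #    keeps this sketch self-contained)
        fid_loss = F.mse_loss(x_w, x_p)

        loss = wm_loss + fidelity_weight * fid_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return watermarked
```

At verification time, one would generate images with the returned model, pass them through the same decoder, and compare the thresholded bits against the embedded code; the frozen pivot is only needed during embedding.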
