VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models

by Sheng-Yen Chou, et al.

Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising. They are the backbone of many generative AI applications, such as text-to-image conditional generation. However, recent studies have shown that basic unconditional DMs (e.g., DDPM and DDIM) are vulnerable to backdoor injection, a type of output manipulation attack triggered by a maliciously embedded pattern at model input. This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. Our framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Experiments show that our unified framework facilitates the backdoor analysis of different DM configurations and provides new insights into caption-based backdoor attacks on DMs.
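To make the attack surface concrete, the backdoor described in prior work on unconditional DMs (e.g., "How to Backdoor Diffusion Models?") poisons the forward diffusion process so that trigger-stamped inputs drift toward an attacker-chosen target. A minimal sketch of such a poisoned forward step is below; the function name and the specific mean-shift form are illustrative assumptions, not code from the paper:

```python
import numpy as np

def backdoored_forward_step(x0, trigger, alpha_bar_t, rng):
    """Toy sketch of a BadDiffusion-style poisoned forward step.

    A clean DDPM forward step samples
        x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I).
    The poisoned variant additionally shifts the mean toward the trigger g:
        x_t = sqrt(abar_t) * x0 + (1 - sqrt(abar_t)) * g + sqrt(1 - abar_t) * eps,
    so the trigger vanishes at t = 0 (abar_t = 1) and dominates the mean as
    abar_t -> 0, coupling the trigger pattern to the attacker's target output.
    """
    eps = rng.standard_normal(x0.shape)
    mean = np.sqrt(alpha_bar_t) * x0 + (1.0 - np.sqrt(alpha_bar_t)) * trigger
    x_t = mean + np.sqrt(1.0 - alpha_bar_t) * eps
    return x_t, eps
```

During training, the attacker mixes a small fraction of such poisoned pairs into the clean objective; at inference, only inputs carrying the trigger activate the manipulated output.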


