DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing

by Jiawei Zhang et al.

Diffusion models have been leveraged to perform adversarial purification and thus provide both empirical and certified robustness for a standard model. On the other hand, different robustly trained smoothed models have been studied to improve certified robustness. This raises a natural question: can diffusion models be used to achieve improved certified robustness for those robustly trained smoothed models? In this work, we first show theoretically that instances recovered by diffusion models lie in a bounded neighborhood of the original instance with high probability, and that "one-shot" denoising diffusion probabilistic models (DDPMs) can approximate the mean of the generated distribution of a continuous-time diffusion model, which approximates the original instance under mild conditions. Inspired by our analysis, we propose a certifiably robust pipeline, DiffSmooth, which first performs adversarial purification via diffusion models and then maps the purified instances to a common region via a simple yet effective local smoothing strategy. We conduct extensive experiments on different datasets and show that DiffSmooth achieves state-of-the-art certified robustness compared with eight baselines. For instance, DiffSmooth improves the state-of-the-art certified accuracy from 36.0% to 53.0% under ℓ_2 radius 1.5 on ImageNet. The code is available at [https://github.com/javyduck/DiffSmooth].
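The pipeline described above can be sketched in a few lines: add Gaussian noise at the certification scale, purify with a one-shot denoiser, then apply local smoothing (predictions averaged over small perturbations) before the majority vote used by randomized smoothing. The sketch below is a minimal illustration, not the authors' implementation; `denoise` and `classify` are hypothetical callables standing in for the diffusion model and the smoothed classifier, and the parameter names are assumptions.

```python
import numpy as np

def diffsmooth_predict(x, denoise, classify, sigma=0.5, sigma_local=0.25,
                       n_samples=100, n_local=5, rng=None):
    """Sketch of a DiffSmooth-style prediction: purify noisy copies of x
    with a one-shot denoiser, locally smooth each purified instance, and
    return the majority-vote label (as in randomized smoothing)."""
    rng = rng if isinstance(rng, np.random.Generator) else np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        # Randomized-smoothing noise at the certification scale sigma.
        x_noisy = x + sigma * rng.standard_normal(x.shape)
        # One-shot denoising approximates the mean of the diffusion
        # model's generated distribution, i.e. the purified instance.
        x_pure = denoise(x_noisy)
        # Local smoothing: vote over small perturbations so purified
        # instances are mapped to a common region.
        preds = [classify(x_pure + sigma_local * rng.standard_normal(x.shape))
                 for _ in range(n_local)]
        label = max(set(preds), key=preds.count)  # per-sample majority
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

In practice the vote counts also feed a binomial confidence test to certify an ℓ_2 radius around `x`; the sketch only shows the prediction path.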



Related papers:

- Better Diffusion Models Further Improve Adversarial Training
- Double Sampling Randomized Smoothing
- Guided Diffusion Model for Adversarial Purification from Random Noise
- DensePure: Understanding Diffusion Models towards Adversarial Robustness
- TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets
- Unsupervised 3D Out-of-Distribution Detection with Latent Diffusion Models
- Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity
