Diffusion Visual Counterfactual Explanations

by Maximilian Augustin, et al.

Visual Counterfactual Explanations (VCEs) are an important tool for understanding the decisions of an image classifier: 'small' but 'realistic' semantic changes to an image that alter the classifier's decision. Current approaches for generating VCEs are restricted to adversarially robust models and often contain non-realistic artefacts, or are limited to image classification problems with few classes. In this paper, we overcome these limitations by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers via a diffusion process. Two modifications to the diffusion process are key for our DVCEs. First, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and a late start of the diffusion process, allows us to generate images with minimal semantic changes relative to the original while flipping the classification. Second, our cone regularization via an adversarially robust model ensures that the diffusion process does not converge to trivial, non-semantic changes, but instead produces realistic images of the target class for which the classifier assigns high confidence.
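The cone regularization described above can be pictured as a geometric constraint: the guidance gradient of the target classifier is projected onto a cone of fixed half-angle around the gradient of the adversarially robust model. The sketch below illustrates only this projection step; the function name, the half-angle parameter, and the plain NumPy setting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cone_project(g, g_robust, alpha_deg):
    """Project gradient g onto the convex cone of half-angle alpha_deg
    (in degrees) around the robust model's gradient g_robust.
    Hypothetical helper illustrating the cone-regularization idea."""
    r = g_robust / np.linalg.norm(g_robust)        # unit cone axis
    cos_theta = np.dot(g, r) / np.linalg.norm(g)   # cosine of angle(g, axis)
    alpha = np.deg2rad(alpha_deg)
    if cos_theta >= np.cos(alpha):
        return g                                   # g already lies inside the cone
    # Component of g orthogonal to the cone axis.
    g_perp = g - np.dot(g, r) * r
    perp_norm = np.linalg.norm(g_perp)
    if perp_norm < 1e-12:
        return np.zeros_like(g)                    # g anti-parallel to the axis
    # Direction on the cone boundary closest to g.
    u = np.cos(alpha) * r + np.sin(alpha) * g_perp / perp_norm
    # Orthogonal projection of g onto the cone (clipped at the apex).
    return max(np.dot(g, u), 0.0) * u
```

Intuitively, guidance directions that agree with the robust model pass through unchanged, while directions that would push toward non-semantic, adversarial-style perturbations are pulled back to the cone boundary.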




Related papers:

- Sparse Visual Counterfactual Explanations in Image Space
- Diffusion Models for Counterfactual Explanations
- STEEX: Steering Counterfactual Explanations with Semantics
- Removing input features via a generative model to explain their attributions to classifier's decisions
- DeDUCE: Generating Counterfactual Explanations Efficiently
- Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals
- Common Diffusion Noise Schedules and Sample Steps are Flawed
