Sparse Visual Counterfactual Explanations in Image Space

by Valentyn Boreiko, et al.

Visual counterfactual explanations (VCEs) in image space are an important tool for understanding the decisions of image classifiers, as they show which changes to an image would flip the classifier's decision. Generating them directly in image space is challenging and requires robust models due to the problem of adversarial examples. Existing techniques for generating VCEs in image space suffer from spurious changes in the background. Our novel perturbation model for VCEs, together with its efficient optimization via our novel Auto-Frank-Wolfe scheme, yields sparse VCEs that are significantly more object-centric. Moreover, we show that VCEs can be used to detect undesired behavior of ImageNet classifiers caused by spurious features in the ImageNet dataset, and we discuss how estimates of the data-generating distribution can be used for VCEs.
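To illustrate why a Frank-Wolfe scheme over a sparsity-inducing constraint set tends to produce sparse perturbations, here is a minimal sketch of plain Frank-Wolfe over an l1-ball on a toy quadratic objective. This is not the paper's perturbation model or the Auto-Frank-Wolfe scheme (which, among other things, adapts the step size); the function names and the toy objective are purely illustrative. The key mechanism it shows is that the linear minimization oracle over the l1-ball returns a signed basis vector, so each iterate is a convex combination of few such vertices.

```python
import numpy as np

def frank_wolfe_l1(grad_f, radius, dim, steps=200):
    """Minimize a smooth f over the l1-ball ||x||_1 <= radius via Frank-Wolfe.

    The linear minimization oracle (LMO) over the l1-ball is a signed,
    scaled basis vector, so iterates are convex combinations of at most
    `steps` such vertices -- the source of sparsity in the solution."""
    x = np.zeros(dim)
    for t in range(steps):
        g = grad_f(x)
        i = np.argmax(np.abs(g))           # coordinate with largest gradient magnitude
        s = np.zeros(dim)
        s[i] = -radius * np.sign(g[i])     # LMO: vertex of the l1-ball
        gamma = 2.0 / (t + 2.0)            # standard Frank-Wolfe step-size schedule
        x = (1.0 - gamma) * x + gamma * s  # stay feasible via convex combination
    return x

# Toy objective: f(x) = 0.5 * ||x - b||^2 for a target b outside the ball.
# The constrained minimizer is the l1-projection of b, which is sparse.
b = np.array([3.0, 0.5, -0.2, 0.1])
x = frank_wolfe_l1(lambda x: x - b, radius=1.0, dim=4)
# x concentrates all its l1-budget on the dominant coordinate of b.
```

On this toy problem the l1-projection of `b` onto the unit ball is `[1, 0, 0, 0]`: the entire perturbation budget is spent on the single most influential coordinate, which is the behavior one wants from a sparse, object-centric perturbation model.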


