OCTET: Object-aware Counterfactual Explanations

by Mehdi Zemni et al.

Deep vision models are now widely deployed in safety-critical applications, e.g., autonomous driving, and the explainability of such models is becoming a pressing concern. Among explanation methods, counterfactual explanations aim to find minimal and interpretable changes to the input image that would also change the output of the model to be explained. Such explanations point end-users to the main factors that influence the model's decision. However, previous methods struggle to explain decision models trained on images with many objects, e.g., urban scenes, which are harder to work with but arguably more critical to explain. In this work, we propose to tackle this issue with an object-centric framework for counterfactual explanation generation. Our method, inspired by recent generative modeling work, encodes the query image into a latent space that is structured to ease object-level manipulations. In doing so, it gives the end-user control over which search directions (e.g., spatial displacement of objects, style modification, etc.) are explored during counterfactual generation. We conduct a set of experiments on counterfactual explanation benchmarks for driving scenes, and we show that our method can be adapted beyond classification, e.g., to explain semantic segmentation models. To complete our analysis, we design and run a user study that measures the usefulness of counterfactual explanations in understanding a decision model. Code is available at https://github.com/valeoai/OCTET.
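The core idea of latent-space counterfactual search can be illustrated with a minimal sketch: optimize a latent code so the model's prediction flips to a target class while a proximity penalty keeps the counterfactual close to the query. The sketch below is a toy illustration, not OCTET's actual pipeline: a linear classifier with a sigmoid stands in for the decision model, the identity map stands in for the decoder, and `counterfactual_search`, `lam`, `lr`, and `steps` are hypothetical names chosen for this example.

```python
import numpy as np

def counterfactual_search(z0, w, b, target=1.0, lam=0.1, lr=0.5, steps=200):
    """Gradient-based counterfactual search in a latent space (toy sketch).

    z0      : latent code of the query image
    w, b    : weights of a toy linear classifier standing in for the
              decision model, p(z) = sigmoid(w @ z + b)
    target  : desired prediction for the counterfactual
    lam     : weight of the proximity term ||z - z0||^2, which keeps
              the counterfactual change minimal
    """
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    z = z0.copy()
    for _ in range(steps):
        p = sigmoid(w @ z + b)
        # Gradient of binary cross-entropy toward `target`,
        # plus the gradient of the proximity penalty.
        grad = (p - target) * w + 2.0 * lam * (z - z0)
        z -= lr * grad
    return z
```

In OCTET's object-centric setting, the latent code is factored per object, so the same kind of search can be restricted to chosen directions (one object's position or style) rather than run over the whole code at once.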


STEEX: Steering Counterfactual Explanations with Semantics

LIMEcraft: Handcrafted superpixel selection and inspection for Visual eXplanations

Counterfactual Explainer Framework for Deep Reinforcement Learning Models Using Policy Distillation

Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN

SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems

Calculating and Visualizing Counterfactual Feature Importance Values

For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI
