Do Explanations Explain? Model Knows Best

03/04/2022
by Ashkan Khakzar, et al.

Which input features contribute to a neural network's output remains a mystery. Various explanation (feature attribution) methods have been proposed in the literature to shed light on this problem. One peculiar observation is that these explanations (attributions) point to different features as being important. This phenomenon raises the question: which explanation should we trust? We propose a framework for evaluating explanations using the neural network model itself. The framework leverages the network to generate input features that impose a particular behavior on the output. Using the generated features, we devise controlled experimental setups to evaluate whether an explanation method conforms to a given axiom, yielding an empirical framework for the axiomatic evaluation of explanation methods. We evaluate well-known and promising explanation solutions using the proposed framework. The framework provides a toolset to reveal properties and drawbacks within existing and future explanation solutions.
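To make the evaluation idea concrete, here is a minimal sketch in PyTorch, not the authors' implementation. It generates a "null" input feature, a masked perturbation optimized so that the model's output stays unchanged, and then measures how much attribution an explanation method leaks onto that feature; a method conforming to a null-feature axiom should assign it near-zero attribution. All names (`model`, `x`, `mask`, `target_class`) and the optimization objective are illustrative assumptions.

```python
# Hypothetical sketch of axiomatic evaluation via model-generated features.
# `model` is any PyTorch classifier taking a (1, C, H, W) tensor, `x` is an
# input image, and `mask` is a binary tensor marking the region to turn into
# a null feature; none of these come from the paper's code.
import torch

def make_null_feature(model, x, mask, target_class, steps=200, lr=0.05):
    """Optimize a perturbation inside `mask` so that the target logit is
    unchanged; the perturbed pixels are then null features by construction."""
    model.eval()
    with torch.no_grad():
        ref_logit = model(x)[0, target_class]
    # Start from visible noise, then pull the output back to its reference value.
    delta = (0.5 * torch.randn_like(x)).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logit = model(x + delta * mask)[0, target_class]
        loss = (logit - ref_logit) ** 2  # the feature must not move the output
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta * mask).detach()

def null_feature_leakage(attribution, mask):
    """Fraction of total |attribution| on the null feature; an explanation
    conforming to the axiom should score near zero."""
    a = attribution.abs()
    return float((a * mask).sum() / a.sum())

# Example with plain gradient saliency as the explanation under test:
# x_null = make_null_feature(model, x, mask, target_class=y)
# x_null.requires_grad_(True)
# model(x_null)[0, y].backward()
# print(null_feature_leakage(x_null.grad, mask))
```

A full evaluation along these lines would repeat this over many inputs and masks and rank attribution methods by how much attribution mass they place on features that, by construction, do not affect the output.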


