Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

by Alexander Binder, et al.

While the evaluation of explanations is an important step towards trustworthy models, it needs to be done carefully, and the employed metrics need to be well understood. In particular, model-randomization testing is often overvalued and treated as a sole criterion for selecting or discarding explanation methods. To address the shortcomings of this test, we start by observing an experimental gap in the ranking of explanation methods between randomization-based sanity checks [1] and model-output faithfulness measures (e.g. [25]). We identify limitations of model-randomization-based sanity checks for the purpose of evaluating explanations. Firstly, we show that uninformative attribution maps created with zero pixel-wise covariance easily achieve high scores in this type of check. Secondly, we show that top-down model randomization preserves the scales of forward-pass activations with high probability. That is, channels with large activations have a high probability of contributing strongly to the output, even after the network above them has been randomized. Hence, explanations after randomization can only be expected to differ to a limited extent. This explains the observed experimental gap. In summary, these results demonstrate the inadequacy of model-randomization-based sanity checks as a criterion for ranking attribution methods.
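The second observation can be illustrated with a toy numpy sketch (an illustration under assumptions, not the paper's experiment): a channel whose forward activation is much larger than its neighbors keeps a large expected contribution to the output even when the layer above it is freshly re-initialized, because randomization rescales weights but not the incoming activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations: channel 0 strongly active, channel 1 weak.
x = np.array([10.0, 0.1])

trials = 1000
dominant = 0
for _ in range(trials):
    # Re-initialized ("randomized") linear layer on top: 5 outputs, 2 inputs.
    W = rng.normal(size=(5, 2))
    # Magnitude of each channel's contribution to the outputs.
    contrib = np.abs(W) * np.abs(x)
    if contrib[:, 0].sum() > contrib[:, 1].sum():
        dominant += 1

fraction = dominant / trials
print(fraction)  # close to 1.0: the strongly active channel almost always dominates
```

Because the large-activation channel dominates in nearly every randomized draw, attribution maps computed before and after top-down randomization can be expected to remain partly similar, which is consistent with the gap the abstract describes.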




