Towards a Theory of Faithfulness: Faithful Explanations of Differentiable Classifiers over Continuous Data

by Nico Potyka, et al.

There is broad agreement in the literature that explanation methods should be faithful to the model that they explain, but faithfulness remains a rather vague term. We revisit faithfulness in the context of continuous data and propose two formal definitions of faithfulness for feature attribution methods. Qualitative faithfulness demands that scores reflect the true qualitative effect (positive vs. negative) of a feature on the model; quantitative faithfulness demands that the magnitude of scores reflects the true quantitative effect. We discuss under which conditions, and to what extent (local vs. global), these requirements can be satisfied. As an application of the conceptual idea, we look at differentiable classifiers over continuous data and characterize Gradient-scores as follows: every qualitatively faithful feature attribution method is qualitatively equivalent to Gradient-scores. Furthermore, if an attribution method is quantitatively faithful in the sense that changes of the output of the classifier are proportional to the scores of features, then it is either equivalent to Gradient-scores or it is based on an inferior approximation of the classifier. To illustrate the practical relevance of the theory, we experimentally demonstrate that popular attribution methods can fail to give faithful explanations in the setting where the data is continuous and the classifier differentiable.
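For a differentiable classifier, the Gradient-scores discussed in the abstract are simply the partial derivatives of the model output with respect to the input features. The sketch below, using a hypothetical logistic model (the weights and input are illustrative assumptions, not from the paper), computes Gradient-scores analytically and checks them against central finite differences:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical differentiable classifier over continuous data:
# a logistic model f(x) = sigmoid(w.x + b). Weights are illustrative.
w = np.array([2.0, -1.0, 0.5])
b = -0.3

def f(x):
    return sigmoid(x @ w + b)

def gradient_scores(x):
    # For the logistic model, df/dx_i = f(x) * (1 - f(x)) * w_i.
    p = f(x)
    return p * (1.0 - p) * w

x = np.array([0.4, 1.2, -0.7])
scores = gradient_scores(x)

# Sanity check: Gradient-scores should match finite-difference derivatives.
eps = 1e-6
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
               for e in np.eye(3)])
print(scores)
print(np.allclose(scores, fd, atol=1e-6))
```

Note how the sign of each score matches the qualitative effect of the corresponding feature (here, the sign of the weight), which is exactly the qualitative-faithfulness property the abstract describes.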




