"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution

by   Yuyou Gan, et al.

Understanding the decision process of neural networks is hard. One vital explanation method attributes a model's decision to pivotal input features. Although many attribution algorithms have been proposed, most of them focus solely on improving faithfulness to the model. However, real environments contain much random noise, which may lead to great fluctuations in the explanations. More seriously, recent works show that explanation algorithms are vulnerable to adversarial attacks. All of this makes explanations hard to trust in real scenarios. To bridge this gap, we propose a model-agnostic method, Median Test for Feature Attribution (MeTFA), to quantify the uncertainty and increase the stability of explanation algorithms with theoretical guarantees. MeTFA provides two functions: (1) it examines whether a feature is significantly important or unimportant and generates a MeTFA-significant map to visualize the results; (2) it computes a confidence interval for a feature attribution score and generates a MeTFA-smoothed map to increase the stability of the explanation. Experiments show that MeTFA improves the visual quality of explanations and significantly reduces instability while maintaining faithfulness. To quantitatively evaluate the faithfulness of an explanation under different noise settings, we further propose several robust faithfulness metrics. Experimental results show that the MeTFA-smoothed explanation can significantly increase robust faithfulness. In addition, we use two scenarios to demonstrate MeTFA's potential in applications. First, when applied to the SOTA explanation method for locating context bias in semantic segmentation models, MeTFA-significant explanations use far smaller regions to maintain 99%+ faithfulness. Second, when tested against different explanation-oriented attacks, MeTFA can help defend both vanilla and adaptive adversarial attacks against explanations.
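The core idea of a median test with a confidence interval over noisy attribution maps can be sketched with a standard distribution-free (order-statistic) interval for the per-feature median. The sketch below is our own illustration, not the paper's exact procedure: the function name, the symmetric binomial rank interval, and the choice of the sample median as the smoothed map are all assumptions made here for clarity.

```python
import numpy as np
from scipy.stats import binom

def median_confidence_interval(samples, alpha=0.05):
    """Distribution-free CI for the per-feature median attribution.

    samples: array of shape (K, ...) holding K attribution maps computed
    under independent random input noise. For sorted per-feature values
    x_(1) <= ... <= x_(K), the interval [x_(l), x_(u)] covers the true
    median with probability >= 1 - alpha by the usual binomial argument:
    the number of samples below the median is Binomial(K, 0.5).
    """
    K = samples.shape[0]
    s = np.sort(samples, axis=0)
    # Symmetric order-statistic ranks from the Binomial(K, 0.5) quantiles.
    lo_rank = max(int(binom.ppf(alpha / 2, K, 0.5)), 0)
    hi_rank = min(int(binom.ppf(1 - alpha / 2, K, 0.5)), K - 1)
    lower, upper = s[lo_rank], s[hi_rank]
    # Aggregating by the median yields a smoothed attribution map.
    smoothed = np.median(samples, axis=0)
    return lower, smoothed, upper
```

A significance map in the same spirit could then mark a feature as important when its entire interval lies above a chosen threshold, and unimportant when the interval lies entirely below it.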



