Regularized adversarial examples for model interpretability

by Yoel Shoshan et al.

As machine learning algorithms continue to improve, there is an increasing need to explain why a model produces a certain prediction for a given input. In recent years, several methods for model interpretability have been developed, aiming to identify which regions of the input are the main drivers of the model's prediction. In parallel, the research community has invested significant effort in developing adversarial example generation methods that fool models without altering the true label of the input, as it would be assigned by a human annotator. In this paper, we bridge the gap between adversarial example generation and model interpretability, and introduce a modification to the adversarial example generation process that encourages better interpretability. We analyze the proposed method on a public medical imaging dataset, both quantitatively and qualitatively, and show that it significantly outperforms the leading known alternative method. Our suggested method is simple to implement and can easily be plugged into most common adversarial example generation frameworks. Additionally, we propose APE ("Adversarial Perturbative Explanation"), an explanation quality metric that measures how well an explanation describes model decisions.
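The page does not include code, but the recipe the abstract describes, generating an adversarial perturbation under regularization so that the perturbed region itself reads as an explanation, can be sketched briefly. The PyTorch snippet below is illustrative only: the specific L1 and total-variation penalties, their weights, and all function and argument names are assumptions chosen to make the perturbation sparse and spatially contiguous, not necessarily the authors' exact formulation.

```python
import torch


def regularized_adversarial_explanation(model, x, target_class,
                                        steps=200, lr=0.05,
                                        l1_weight=1e-3, tv_weight=1e-3):
    """Optimize a perturbation that lowers the model's score for
    `target_class` while regularizers keep it sparse and smooth,
    so the perturbed region can be read as an explanation.

    `l1_weight` and `tv_weight` are illustrative hyperparameters,
    not values taken from the paper.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        # Drive down the score of the originally predicted class.
        attack_loss = logits[:, target_class].mean()
        # Sparsity term: prefer perturbing as few pixels as possible.
        l1 = delta.abs().mean()
        # Total-variation term: prefer spatially contiguous regions.
        tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean() \
           + (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
        loss = attack_loss + l1_weight * l1 + tv_weight * tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The magnitude of the final perturbation serves as the explanation map.
    return delta.detach().abs()
```

Given a trained classifier `net`, an input batch `image`, and its predicted class `pred_class`, calling `regularized_adversarial_explanation(net, image, pred_class)` would return a per-pixel map whose magnitude highlights the regions most responsible for the prediction.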



