Defense Against Explanation Manipulation

11/08/2021
by Ruixiang Tang, et al.

Explainable machine learning is attracting increasing attention because it improves model transparency, which helps machine learning earn trust in real-world applications. However, explanation methods have recently been shown to be vulnerable to manipulation: an adversary can easily change a model's explanation while keeping its prediction unchanged. To tackle this problem, prior efforts have used more stable explanation methods or changed model configurations. In this work, we address the problem from the training perspective and propose a new training scheme called Adversarial Training on EXplanations (ATEX), which improves a model's internal explanation stability regardless of the specific explanation method applied. Instead of directly specifying explanation values over data instances, ATEX only places requirements on model predictions, which avoids involving second-order derivatives in the optimization. As a further finding, explanation stability is closely related to another property of the model: its susceptibility to adversarial attack. Our experiments show that ATEX improves model robustness against manipulation targeting explanations, and also brings additional benefits, including smoother explanations and improved efficacy of adversarial training when applied to the model.
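To make the setting concrete, below is a minimal PyTorch sketch of the two ingredients the abstract refers to: a vanilla gradient explanation, and a training loss that constrains model predictions at inputs sampled near each instance rather than constraining the explanation values themselves. The consistency term and its hyperparameters (`sigma`, `n_samples`) are our own illustrative assumptions, not the paper's actual ATEX objective; the sketch only shows why constraining predictions in a neighborhood suffices, since a first-order explanation at x is determined by the model's predictions around x, so no second-order derivatives appear in the loss.

```python
import torch
import torch.nn.functional as F

def saliency(model, x, target):
    # Vanilla gradient explanation: gradient of the target-class logit
    # with respect to the input. This is the kind of explanation an
    # adversary can manipulate while leaving predictions unchanged.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    score = logits[torch.arange(len(x)), target].sum()
    grad, = torch.autograd.grad(score, x)
    return grad

def prediction_consistency_loss(model, x, y, sigma=0.05, n_samples=4):
    # Illustrative stand-in for an ATEX-style objective: standard
    # cross-entropy plus a term penalizing prediction changes at inputs
    # perturbed around x. Stabilizing predictions in a neighborhood
    # implicitly stabilizes first-order explanations, and the loss never
    # differentiates a gradient (no second-order derivatives).
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    target_probs = F.softmax(logits.detach(), dim=-1)
    consistency = x.new_zeros(())
    for _ in range(n_samples):
        x_pert = x + sigma * torch.randn_like(x)
        log_probs = F.log_softmax(model(x_pert), dim=-1)
        consistency = consistency + F.kl_div(
            log_probs, target_probs, reduction="batchmean")
    return loss + consistency / n_samples
```

In a training loop, `prediction_consistency_loss` would simply replace the usual cross-entropy call; the Gaussian sampling here is only one plausible choice of neighborhood.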

Related Research

- 03/14/2022: Rethinking Stability for Attribution-based Explanations
- 06/24/2022: Robustness of Explanation Methods for NLP Models
- 07/13/2020: A simple defense against adversarial attacks on heatmap explanations
- 09/05/2022: "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
- 07/25/2019: How to Manipulate CNNs to Make Them Lie: the GradCAM Case
- 03/07/2022: Robustness and Usefulness in AI Explanation Methods
- 04/20/2022: Backdooring Explainable Machine Learning
