Estimating the Adversarial Robustness of Attributions in Text with Transformers

by Adam Ivankay, et al.

Explanations are crucial components of deep neural network (DNN) classifiers. In high-stakes applications, faithful and robust explanations are important for understanding and gaining trust in DNN classifiers. However, recent work has shown that state-of-the-art attribution methods in text classifiers are susceptible to imperceptible adversarial perturbations that alter explanations significantly while preserving the correct prediction outcome. If undetected, this can critically mislead the users of DNNs. Thus, it is essential to understand how such adversarial perturbations influence the networks' explanations and how perceptible the perturbations are. In this work, we establish a novel definition of attribution robustness (AR) in text classification, based on Lipschitz continuity. Crucially, it reflects both the attribution change induced by adversarial input alterations and the perceptibility of such alterations. Moreover, we introduce a wide set of text similarity measures to effectively capture locality between two text samples and the imperceptibility of adversarial perturbations in text. We then propose our novel TransformerExplanationAttack (TEA), a strong adversary that provides a tight estimate of attribution robustness in text classification. TEA uses state-of-the-art language models to extract word substitutions that result in fluent, contextual adversarial samples. Finally, with experiments on several text classification architectures, we show that TEA consistently outperforms current state-of-the-art AR estimators, yielding perturbations that alter explanations to a greater extent while being more fluent and less perceptible.
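The Lipschitz-style AR definition sketched in the abstract can be made concrete as a maximization, over label-preserving perturbations, of the ratio between attribution change and input-text distance. The sketch below is a minimal illustration under stated assumptions: the function names, the cosine attribution distance, and the word-level text distance are placeholders chosen for clarity, not the paper's exact formulation or the TEA attack itself.

```python
import numpy as np

def attribution_distance(a, b):
    # Cosine distance between attribution vectors (an illustrative choice;
    # the paper considers a range of attribution distance measures).
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return 1.0 - float(a @ b) / denom

def estimate_attribution_robustness(x, candidates, predict, attribute, text_distance):
    """Lower-bound the Lipschitz-style AR constant of `attribute` at `x`:
    the maximum, over perturbations x' that keep the prediction unchanged,
    of  d_attr(a(x), a(x')) / d_text(x, x').
    A real estimator like TEA searches for candidates adversarially; here
    they are simply supplied as a list."""
    a_x = attribute(x)
    y = predict(x)
    best = 0.0
    for x_adv in candidates:
        if predict(x_adv) != y:
            continue  # AR only considers label-preserving perturbations
        d_in = text_distance(x, x_adv)
        if d_in <= 0:
            continue  # identical inputs carry no information about the ratio
        best = max(best, attribution_distance(a_x, attribute(x_adv)) / d_in)
    return best

# Toy usage with dummy classifier, attribution, and word-substitution distance.
predict = lambda s: 1
attribute = lambda s: np.array([len(w) for w in s.split()], dtype=float)
text_distance = lambda a, b: float(sum(u != v for u, v in zip(a.split(), b.split())))
r = estimate_attribution_robustness(
    "good movie", ["excellent movie"], predict, attribute, text_distance)
```

A larger value of the ratio means a smaller (less perceptible) input change suffices to move the attribution map, i.e. the attributions are less robust; stronger adversaries such as TEA yield tighter (larger) lower bounds on this quantity.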




Related papers:

- Fooling Explanations in Text Classifiers
- Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder
- DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications
- Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction
- Generating Hierarchical Explanations on Text Classification Without Connecting Rules
- SafeAMC: Adversarial training for robust modulation recognition models
- FAR: A General Framework for Attributional Robustness
