Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks

04/09/2021
by Hanjie Chen, et al.

Explaining neural network models is important for increasing their trustworthiness in real-world applications. Most existing methods generate post-hoc explanations by identifying the attributions of individual features or detecting interactions between adjacent features. However, for models that take text pairs as input (e.g., paraphrase identification), these methods are not sufficient to capture feature interactions between the two texts, and their straightforward extension, computing all word-pair interactions across the two texts, is computationally inefficient. In this work, we propose the Group Mask (GMASK) method, which implicitly detects word correlations by grouping correlated words from the input text pair and measuring their contribution to the corresponding NLP task as a whole. The proposed method is evaluated with two model architectures (the decomposable attention model and BERT) on four datasets covering natural language inference and paraphrase identification. Experiments show the effectiveness of GMASK in providing faithful explanations for these models.
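
To make the core idea concrete, below is a minimal sketch of group-level masking: words from both sentences are assigned to groups, whole groups are masked jointly, and a group's importance is the average drop in the model's predicted probability when that group is masked. Note that GMASK learns the word grouping and the masks; this sketch substitutes a fixed grouping and random joint masking, and every name in it (predict, group_importance, the toy grouping) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_importance(predict, tokens, groups, n_groups,
                     mask_token="[MASK]", n_samples=200):
    """Toy group-masking attribution (not the paper's algorithm).

    predict  : black-box callable mapping a token list to the model's
               probability for the predicted class.
    tokens   : concatenated tokens of the sentence pair.
    groups   : groups[i] is the group id (0..n_groups-1) of tokens[i];
               a group may span both sentences, mimicking cross-text
               word correlations.
    Returns a length-n_groups array: the average probability drop when
    a group is masked, marginalized over random masking of the others.
    """
    base = predict(tokens)
    drops = np.zeros(n_groups)
    counts = np.zeros(n_groups)
    for _ in range(n_samples):
        # Randomly choose which groups to keep; masked groups are
        # replaced token-by-token with the mask symbol.
        keep = rng.random(n_groups) < 0.5
        masked = [t if keep[groups[i]] else mask_token
                  for i, t in enumerate(tokens)]
        p = predict(masked)
        for g in range(n_groups):
            if not keep[g]:
                drops[g] += base - p
                counts[g] += 1
    return drops / np.maximum(counts, 1)

# Toy usage with a fake "model" that just scores unmasked coverage;
# the grouping below is hypothetical, pairing correlated words across
# the two sentences (e.g., "man"/"person", "plays guitar"/"makes music").
if __name__ == "__main__":
    pair = ["a", "man", "plays", "guitar", "[SEP]",
            "a", "person", "makes", "music"]
    grp = [0, 1, 2, 2, 3, 0, 1, 2, 2]
    fake_predict = lambda toks: sum(t != "[MASK]" for t in toks) / len(toks)
    print(group_importance(fake_predict, pair, grp, n_groups=4))
```

Scoring groups rather than single words is what keeps this tractable: with g groups the search space scales with g instead of with all word pairs between the two texts, which is the inefficiency the abstract attributes to naive pairwise extensions.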

