Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers

10/01/2020
by Hanjie Chen, et al.

To build an interpretable neural text classifier, most prior work has focused on designing inherently interpretable models or finding faithful explanations. A new line of work on improving model interpretability has only recently begun, and many existing methods require either prior information or human annotations as additional inputs during training. To address this limitation, we propose the variational word mask (VMASK) method, which automatically learns task-specific important words and reduces irrelevant information in classification, ultimately improving the interpretability of model predictions. The proposed method is evaluated with three neural text classifiers (CNN, LSTM, and BERT) on seven benchmark text classification datasets. Experiments show the effectiveness of VMASK in improving both model prediction accuracy and interpretability.
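
Below is a minimal PyTorch sketch of how such a variational word mask might work, reconstructed only from the abstract above. It is an illustrative assumption, not the paper's exact formulation: the layer name (WordMask), the amortized inference network, the Gumbel-softmax relaxation, and the Bernoulli(0.5) prior are all choices made here for the sketch. The idea is to sample a soft keep/drop weight for each token, scale its embedding accordingly, and regularize the mask distribution with a KL term added to the classification loss.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordMask(nn.Module):
    # Hypothetical variational word-mask layer (illustrative, not the official VMASK code).
    def __init__(self, embed_dim: int, temperature: float = 0.5):
        super().__init__()
        # Amortized inference network: each token embedding -> 2 logits (drop, keep).
        self.mask_logits = nn.Linear(embed_dim, 2)
        self.temperature = temperature

    def forward(self, embeddings: torch.Tensor):
        # embeddings: (batch, seq_len, embed_dim)
        logits = self.mask_logits(embeddings)                 # (batch, seq_len, 2)
        if self.training:
            # Differentiable Bernoulli sample via the Gumbel-softmax relaxation.
            sample = F.gumbel_softmax(logits, tau=self.temperature, hard=False)
        else:
            sample = F.softmax(logits, dim=-1)
        keep_prob = sample[..., 1:]                           # (batch, seq_len, 1)
        masked = embeddings * keep_prob                       # down-weight irrelevant words
        # KL divergence from an assumed Bernoulli(0.5) prior, added (with a weight)
        # to the classification loss as a regularizer on the mask distribution.
        q = F.softmax(logits, dim=-1)
        kl = (q * (q.clamp_min(1e-8).log() - math.log(0.5))).sum(-1).mean()
        return masked, kl

# Usage sketch: mask embeddings before feeding a downstream classifier.
vmask = WordMask(embed_dim=300)
emb = torch.randn(8, 40, 300)          # batch of 8 sentences, 40 tokens each
masked_emb, kl = vmask(emb)
# total_loss = cross_entropy_loss + beta * kl   (beta is a tuning weight)

At inference time, the expected keep probabilities can double as word-importance scores, which is what would make the predictions more interpretable under this reading of the method.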


