On the Lack of Robust Interpretability of Neural Text Classifiers

06/08/2021
by Muhammad Bilal Zafar, et al.

With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models. One of the most widely adopted approaches for model interpretability is feature-based interpretability, i.e., ranking the features in terms of their impact on model predictions. Several prior studies have focused on assessing the fidelity of feature-based interpretability methods, i.e., measuring the impact of dropping the top-ranked features on the model output. However, relatively little work has been conducted on quantifying the robustness of interpretations. In this work, we assess the robustness of interpretations of neural text classifiers, specifically those based on pretrained Transformer encoders, using two randomization tests. The first compares the interpretations of two models that are identical except for their initializations. The second measures whether the interpretations differ between a model with trained parameters and a model with random parameters. Both tests show surprising deviations from expected behavior, raising questions about the extent of insights that practitioners may draw from interpretations.
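As a rough illustration of the two randomization tests (not the authors' implementation), the sketch below compares token-level attributions between pairs of toy classifiers. The tiny model, the gradient-times-input attribution, and the Spearman/top-k agreement metrics are all illustrative assumptions standing in for the pretrained Transformer encoders and interpretability methods studied in the paper.

```python
# Minimal sketch (illustrative only): the two randomization tests from the
# abstract, run on a toy classifier with gradient-x-input token attributions.
import torch
import torch.nn as nn
from scipy.stats import spearmanr


class TinyTextClassifier(nn.Module):
    """Toy stand-in for a pretrained-Transformer text classifier."""

    def __init__(self, vocab_size=1000, dim=32, num_classes=2, seed=0):
        super().__init__()
        torch.manual_seed(seed)  # different seeds -> different initializations
        self.embed = nn.Embedding(vocab_size, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)              # (seq_len, dim)
        return self.classifier(emb.mean(0)), emb


def token_attributions(model, token_ids):
    """Gradient-x-input saliency: one importance score per input token."""
    logits, emb = model(token_ids)
    emb.retain_grad()
    logits[logits.argmax()].backward()
    return (emb.grad * emb).sum(-1).abs().detach()


def agreement(attr_a, attr_b, k=5):
    """Rank correlation and top-k overlap between two attribution vectors."""
    rho, _ = spearmanr(attr_a.numpy(), attr_b.numpy())
    top_a = set(attr_a.topk(k).indices.tolist())
    top_b = set(attr_b.topk(k).indices.tolist())
    return rho, len(top_a & top_b) / k


tokens = torch.randint(0, 1000, (20,))           # one toy "sentence"

# Test 1: two models identical except for their random initialization.
# (In the paper both models would be trained to convergence; training is
# omitted here to keep the sketch short.)
model_seed0 = TinyTextClassifier(seed=0)
model_seed1 = TinyTextClassifier(seed=1)
print("init-vs-init agreement:",
      agreement(token_attributions(model_seed0, tokens),
                token_attributions(model_seed1, tokens)))

# Test 2: a model with trained parameters vs. one with random parameters.
trained = TinyTextClassifier(seed=0)             # assume trained weights here
random_params = TinyTextClassifier(seed=42)      # never trained
print("trained-vs-random agreement:",
      agreement(token_attributions(trained, tokens),
                token_attributions(random_params, tokens)))
```

Under the expectations described in the abstract, the first comparison should yield high agreement (the models differ only in their seeds) and the second should yield low agreement (random parameters should produce uninformative interpretations); the paper reports surprising deviations from both expectations.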
