Multi-resolution Interpretation and Diagnostics Tool for Natural Language Classifiers

03/06/2023
by Peyman Jalali, et al.

Developing explainability methods for Natural Language Processing (NLP) models is a challenging task, for two main reasons. First, the high dimensionality of the data (large number of tokens) results in low coverage and, in turn, small contributions for the top tokens compared to the overall model performance. Second, owing to their textual nature, the input variables, after appropriate transformations, are effectively binary (presence or absence of a token in an observation), making the input-output relationship difficult to understand. Common NLP interpretation techniques lack flexibility in resolution, because they usually operate at the word level and provide either fully local (message-level) or fully global (over all messages) summaries. The goal of this paper is to create more flexible model explainability summaries by segments of observations or clusters of words that are semantically related to each other. In addition, we introduce a root cause analysis method for NLP models, by analyzing representative False Positive and False Negative examples from different segments. Finally, we illustrate, using a Yelp review data set with three segments (Restaurant, Hotel, and Beauty), that exploiting group/cluster structures in words and/or messages can aid in the interpretation of decisions made by NLP models and can be utilized to assess the model's sensitivity or bias towards gender, syntax, and word meanings.
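To make the multi-resolution idea concrete, below is a minimal, hypothetical sketch (not the authors' code or data) of aggregating per-token attribution scores into word-cluster and message-segment summaries. The corpus, segment labels, and the use of a linear-model attribution with KMeans clustering of tokens are all illustrative assumptions; the paper's actual method and the Yelp data are not reproduced here.

```python
# Sketch: explanations at coarser resolutions than single tokens,
# by summing token attributions within word clusters and message segments.
# All data and modeling choices below are toy assumptions for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy corpus with a segment label per message (e.g., Restaurant vs. Hotel).
texts = ["great food and friendly staff", "room was dirty and noisy",
         "the pasta was amazing", "front desk service was slow"]
labels = [1, 0, 1, 0]                       # e.g., positive vs. negative review
segments = ["Restaurant", "Hotel", "Restaurant", "Hotel"]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Token-level attribution per message: coefficient * feature value
# (a simple linear-model attribution; LIME/SHAP-style scores could be substituted).
attrib = X.toarray() * clf.coef_[0]          # shape: (n_messages, n_tokens)

# Group tokens into clusters (KMeans on TF-IDF columns here, standing in for
# embedding-based semantic clustering) and sum attributions within each cluster.
n_clusters = 3
token_clusters = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(X.T.toarray())

for seg in sorted(set(segments)):
    rows = [i for i, s in enumerate(segments) if s == seg]
    seg_attrib = attrib[rows].sum(axis=0)    # token contributions within the segment
    cluster_scores = {f"cluster_{c}": round(float(seg_attrib[token_clusters == c].sum()), 3)
                      for c in range(n_clusters)}
    print(seg, cluster_scores)
```

Reading contributions at the cluster-by-segment level, rather than per token per message, is what gives the explanation its adjustable resolution in this sketch.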


