Locally Aggregated Feature Attribution on Natural Language Model Understanding

04/22/2022
by Sheng Zhang, et al.

With the growing popularity of deep-learning models, model understanding becomes increasingly important, and much effort has been devoted to demystifying deep neural networks for better interpretability. Feature attribution methods have shown promising results in computer vision, especially gradient-based methods, where effectively smoothing the gradients with reference data is key to robust and faithful results. However, applying these gradient-based methods directly to NLP tasks is nontrivial, because the input consists of discrete tokens and the "reference" tokens are not explicitly defined. In this work, we propose Locally Aggregated Feature Attribution (LAFA), a novel gradient-based feature attribution method for NLP models. Instead of relying on obscure reference tokens, it smooths gradients by aggregating similar reference texts derived from language-model embeddings. For evaluation purposes, we also design experiments on different NLP tasks, including Entity Recognition and Sentiment Analysis on public datasets, as well as key-feature detection on a constructed Amazon catalogue dataset. Experiments demonstrate the superior performance of the proposed method.
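
To make the core idea concrete, here is a minimal PyTorch sketch of the general mechanism the abstract describes: rather than a single fixed baseline token, each input token's references are nearby entries in the language model's embedding table, and gradients are averaged across those references. This is an illustrative reading of the abstract, not the paper's actual algorithm; the names `model`, `input_embeds`, and `embedding_matrix`, the choice of cosine similarity, and the single midpoint interpolation step are all assumptions made for the sketch.

```python
# Illustrative sketch (not the paper's exact algorithm): smooth token
# attributions by averaging gradients over embedding-space neighbours.
import torch
import torch.nn.functional as F

def lafa_style_attributions(model, input_embeds, embedding_matrix, k=5):
    """Aggregate gradients over the k nearest reference embeddings per token.

    model:            callable mapping a (seq_len, dim) embedding tensor
                      to a scalar score, e.g. the predicted-class logit
    input_embeds:     (seq_len, dim) embeddings of the input tokens
    embedding_matrix: (vocab, dim) language-model embedding table
    """
    # Find each token's k most similar vocabulary embeddings
    # (cosine similarity against the whole embedding table).
    with torch.no_grad():
        sims = F.normalize(input_embeds, dim=-1) @ F.normalize(
            embedding_matrix, dim=-1).T             # (seq_len, vocab)
        _, nbr_ids = sims.topk(k, dim=-1)           # (seq_len, k)

    grads = torch.zeros_like(input_embeds)
    for j in range(k):
        # Build one reference text by swapping every token's embedding
        # for its j-th nearest neighbour, then take the gradient at the
        # midpoint between input and reference (a one-step path integral;
        # the single midpoint is a simplifying assumption).
        refs = embedding_matrix[nbr_ids[:, j]]      # (seq_len, dim)
        point = (0.5 * (input_embeds + refs)).detach().requires_grad_(True)
        model(point).backward()
        grads += point.grad
    grads /= k

    # Attribution per token: gradient times (input minus mean reference),
    # summed over the embedding dimension.
    mean_refs = embedding_matrix[nbr_ids].mean(dim=1)
    return ((input_embeds - mean_refs) * grads).sum(dim=-1)
```

Averaging over several nearby references plays the same smoothing role that sampling noisy copies of the input plays in SmoothGrad, while keeping every reference on the manifold of real token embeddings.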

Related research

04/12/2021
A-FMI: Learning Attributions from Deep Networks via Feature Map Importance
Gradient-based attribution methods can aid in the understanding of convo...

07/15/2022
Anomalous behaviour in loss-gradient based interpretability methods
Loss-gradients are used to interpret the decision making process of deep...

10/15/2021
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
To explain NLP models, many methods inform which input tokens are impor...

05/12/2023
Asymmetric feature interaction for interpreting model predictions
In natural language processing (NLP), deep neural networks (DNNs) could ...

11/08/2016
Gradients of Counterfactuals
Gradients have been used to quantify feature importance in machine learn...

01/26/2022
IMACS: Image Model Attribution Comparison Summaries
Developing a suitable Deep Neural Network (DNN) often requires significa...

07/25/2023
Analyzing Chain-of-Thought Prompting in Large Language Models via Gradient-based Feature Attributions
Chain-of-thought (CoT) prompting has been shown to empirically improve t...
