Relative Importance in Sentence Processing

by Nora Hollenstein, et al.

Determining the relative importance of the elements in a sentence is a key factor for effortless natural language understanding. For human language processing, we can approximate patterns of relative importance by measuring reading fixations using eye-tracking technology. In neural language models, gradient-based saliency methods indicate the relative importance of a token for the target objective. In this work, we compare patterns of relative importance in English language processing by humans and models and analyze the underlying linguistic patterns. We find that human processing patterns in English correlate strongly with saliency-based importance in language models and not with attention-based importance. Our results indicate that saliency could be a cognitively more plausible metric for interpreting neural language models. The code is available on GitHub:
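The gradient-based saliency idea described above can be illustrated with a small sketch: compute the gradient of a scalar model score with respect to each token embedding, take the per-token L2 norm as its saliency, and then compare the resulting importance distribution against a fixation pattern via rank correlation. The toy scorer, the fixation vector, and all variable names here are illustrative assumptions, not the paper's actual model or data; a real experiment would backpropagate through a neural language model and use measured eye-tracking fixations.

```python
import numpy as np

# --- Toy setup (hypothetical; a real study would use an LM's embeddings) ---
rng = np.random.default_rng(0)
T, D = 5, 8                       # 5 tokens, embedding dimension 8
E = rng.normal(size=(T, D))       # token embeddings for one sentence
w = rng.normal(size=D)            # toy scoring weights

# Toy scalar objective: score(E) = sum_t (w . e_t)^2
# Its analytic gradient w.r.t. each token embedding e_t is 2 (w . e_t) w.
grad = 2.0 * (E @ w)[:, None] * w[None, :]   # shape (T, D)

# Gradient-based saliency per token: L2 norm of its embedding gradient.
saliency = np.linalg.norm(grad, axis=1)      # shape (T,)

# Normalize into a relative-importance distribution over the tokens.
rel_importance = saliency / saliency.sum()

# --- Comparison with human reading measures (toy fixation durations) ---
fixations = rng.uniform(size=T)   # stand-in for per-token fixation times

def spearman(a, b):
    """Spearman rank correlation (no tie handling, enough for a sketch)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(rel_importance, fixations)
```

The same recipe applies to attention-based importance: replace `saliency` with a token's (aggregated) attention weight and correlate that with the fixation vector instead.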

