Comparing Transformers and RNNs on predicting human sentence processing data

by Danny Merkx et al.
Radboud Universiteit

Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The more recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks, but little is known about its ability to model human language processing. Because human sentence reading has long been thought to involve something akin to recurrence, RNNs may still hold an advantage over Transformers as cognitive models. In this paper we train both Transformer- and RNN-based language models and compare their performance as models of human sentence processing. We use the trained language models to compute surprisal values for the stimuli used in several reading experiments, and we use linear mixed-effects modelling to measure how well surprisal explains measures of human reading effort. Our analysis shows that the Transformers outperform the RNNs as cognitive models in explaining self-paced reading times and N400 strength, but not gaze durations from an eye-tracking experiment.
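The pipeline described above hinges on per-word surprisal: the negative log-probability of each word given its preceding context, as assigned by a trained language model. A minimal sketch of that computation, using a toy add-one-smoothed bigram model as a stand-in for the paper's Transformer and RNN language models (the corpus, function names, and smoothing choice are illustrative assumptions, not the authors' setup):

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Count unigrams and bigrams over tokenised sentences.

    Each sentence gets a "<s>" start token so the first word
    is conditioned on a sentence boundary.
    """
    unigrams = Counter()
    bigrams = Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def surprisal(sentence, unigrams, bigrams, vocab_size):
    """Per-word surprisal -log2 P(w_i | w_{i-1}) with add-one smoothing."""
    tokens = ["<s>"] + sentence
    scores = []
    for prev, word in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
        scores.append((word, -math.log2(p)))
    return scores

# Toy training data and a scored stimulus sentence.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
unigrams, bigrams = train_bigram_lm(corpus)
vocab_size = len(unigrams)
scores = surprisal(["the", "cat", "sat"], unigrams, bigrams, vocab_size)
```

In the paper's setting the same per-word surprisal values, taken from each trained network, would then enter a mixed-effects regression as a predictor of reading times, gaze durations, or N400 amplitude.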

