Leveraging recent advances in Pre-Trained Language Models for Eye-Tracking Prediction

by Varun Madhavan, et al.

Cognitively inspired Natural Language Processing uses human-derived behavioral data, such as eye-tracking data, which reflects the semantic representations of language in the human brain, to augment neural networks on a range of tasks spanning syntax and semantics, with the aim of teaching machines about human language processing mechanisms. In this paper, we use the ZuCo 1.0 and ZuCo 2.0 datasets, which contain eye-gaze features, to explore different linguistic models that directly predict these gaze features for each word with respect to its sentence. We experimented with several neural network models that take the words as inputs to predict the targets. After extensive experimentation and feature engineering, we devised a novel architecture consisting of a RoBERTa token classifier with a dense layer on top for language modeling, and a stand-alone model consisting of dense layers followed by a transformer layer for the extra features we engineered. The final predictions are the mean of the outputs of these two models. We evaluated the models using mean absolute error (MAE) and the R2 score for each target.
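The two-branch design described in the abstract can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the hidden size, the number of engineered features, and the five gaze targets (the CMCL 2021 shared-task targets) are assumptions, and a placeholder embedding stands in for the pre-trained RoBERTa encoder.

```python
import torch
import torch.nn as nn

N_TARGETS = 5    # assumed gaze targets, e.g. nFix, FFD, GPT, TRT, fixProp
N_FEATURES = 10  # assumed number of hand-engineered word features
HIDDEN = 768     # RoBERTa-base hidden size

class TokenRegressor(nn.Module):
    """Token-level regressor: encoder states -> dense prediction head.
    The Embedding below is a stand-in for a pre-trained RoBERTa encoder."""
    def __init__(self, vocab_size: int = 50265):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, HIDDEN)  # placeholder
        self.head = nn.Linear(HIDDEN, N_TARGETS)

    def forward(self, input_ids):                   # (batch, seq)
        return self.head(self.encoder(input_ids))   # (batch, seq, targets)

class FeatureRegressor(nn.Module):
    """Stand-alone branch: dense layers followed by a transformer layer
    over the extra engineered features, then a prediction head."""
    def __init__(self):
        super().__init__()
        self.dense = nn.Sequential(
            nn.Linear(N_FEATURES, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        )
        self.transformer = nn.TransformerEncoderLayer(
            d_model=HIDDEN, nhead=8, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_TARGETS)

    def forward(self, feats):                       # (batch, seq, features)
        return self.head(self.transformer(self.dense(feats)))

def predict(input_ids, feats, lm_branch, feat_branch):
    # Final prediction: mean of the two branches' outputs.
    return (lm_branch(input_ids) + feat_branch(feats)) / 2
```

Averaging the two branches lets the language-model branch and the feature branch be trained and tuned independently before being combined, a simple late-fusion ensemble.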




Related papers:

- TorontoCL at CMCL 2021 Shared Task: RoBERTa with Multi-Stage Fine-Tuning for Eye-Tracking Prediction
- Multilingual Language Models Predict Human Reading Behavior
- Zero Shot Crosslingual Eye-Tracking Data Prediction using Multilingual Transformer Models
- Explaining How Transformers Use Context to Build Predictions
