Discrimination of Radiologists Utilizing Eye-Tracking Technology and Machine Learning: A Case Study

by Stanford Martinez, et al.

Perception-related errors comprise most diagnostic mistakes in radiology. To mitigate this problem, radiologists employ personalized and high-dimensional visual search strategies, otherwise known as search patterns. Qualitative descriptions of these search patterns, in which the physician verbalizes or annotates the order in which they analyze the image, can be unreliable due to discrepancies between what is reported and the actual visual patterns. This discrepancy can interfere with quality improvement interventions and negatively impact patient care. This study presents a novel discretized feature encoding based on spatiotemporal binning of fixation data for efficient geometric alignment and temporal ordering of eye movement when reading chest X-rays. The encoded features of the eye-fixation data are employed by machine learning classifiers to discriminate between faculty and trainee radiologists. We include a clinical trial case study that uses the Area Under the Curve (AUC), accuracy, F1 score, sensitivity, and specificity as class-separability metrics to evaluate the discriminability between the two subjects with regard to their level of experience, and we compare the classification performance to state-of-the-art methodologies. A repeatability experiment with eight subjects, using a separate dataset, experimental protocol, and eye tracker, was also performed to evaluate the robustness of the proposed approach. The numerical results from both experiments demonstrate that classifiers employing the proposed feature encoding outperform the current state-of-the-art in differentiating between radiologists by experience level. This signifies the potential of the proposed method for identifying radiologists' level of expertise and those who would benefit from additional training.
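The abstract describes discretizing fixation data by binning it in space and time before classification. The paper's exact encoding is not given here, so the following is only a minimal sketch of what spatiotemporal binning of (x, y, t) fixation points might look like; the grid size, number of time bins, and normalization are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def encode_fixations(fixations, grid=(8, 8), n_time_bins=4):
    """Discretize fixations into a spatiotemporal histogram.

    fixations: sequence of (x, y, t) tuples, with x and y already
    normalized to [0, 1) image coordinates and t a timestamp.
    Returns a flat feature vector of length
    grid[0] * grid[1] * n_time_bins, suitable for a standard
    machine learning classifier.
    """
    fixations = np.asarray(fixations, dtype=float)
    x, y, t = fixations[:, 0], fixations[:, 1], fixations[:, 2]
    # Map spatial coordinates to grid cell indices.
    xi = np.clip((x * grid[0]).astype(int), 0, grid[0] - 1)
    yi = np.clip((y * grid[1]).astype(int), 0, grid[1] - 1)
    # Normalize timestamps to [0, 1], then map to temporal bins,
    # preserving the coarse order in which regions were viewed.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    ti = np.clip((t_norm * n_time_bins).astype(int), 0, n_time_bins - 1)
    # Count fixations per (time bin, row, column) cell.
    hist = np.zeros((n_time_bins, grid[1], grid[0]))
    for a, b, c in zip(ti, yi, xi):
        hist[a, b, c] += 1
    return hist.ravel()

# Example: five fixations on a normalized chest X-ray image.
fix = [(0.10, 0.20, 0.0), (0.50, 0.50, 0.3), (0.52, 0.48, 0.6),
       (0.90, 0.10, 0.9), (0.10, 0.90, 1.2)]
features = encode_fixations(fix)
print(features.shape)  # (256,) for an 8x8 grid with 4 time bins
```

A vector like this can be fed to any off-the-shelf classifier (e.g. a random forest or SVM) to separate reader groups, which is the general pipeline the abstract outlines.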

