MIDAS: Deep learning human action intention prediction from natural eye movement patterns

01/22/2022
by Paul Festor, et al.

Eye movements have long been studied as a window into the attentional mechanisms of the human brain and have been made accessible as novel human-machine interfaces. However, not everything we gaze upon is something we want to interact with; this is known as the Midas Touch problem for gaze interfaces. To overcome the Midas Touch problem, present interfaces tend not to rely on natural gaze cues, but rather use dwell time or gaze gestures. Here we present an entirely data-driven approach to decoding human intention for object manipulation tasks based solely on natural gaze cues. We run data collection experiments in which 16 participants are given manipulation and inspection tasks to perform on various objects on a table in front of them. The participants' eye movements are recorded with wearable eye-trackers, allowing them to move their head freely and gaze upon the scene. We use our Semantic Fovea, a convolutional neural network model, to identify the objects in the scene and their relation to gaze traces at every frame. We then evaluate the data and examine several ways to model the classification task for intention prediction. Our evaluation shows that intention prediction is not a naive result of the data, but rather relies on non-linear temporal processing of gaze cues. We model the task as a time series classification problem and design a bidirectional Long Short-Term Memory (LSTM) network architecture to decode intentions. Our results show that we can decode human intention of motion purely from natural gaze cues and relative object position, with 91.9% accuracy. Our work demonstrates the feasibility of natural gaze as a Zero-UI interface for human-machine interaction: users only need to act naturally and do not need to interact with the interface itself or deviate from their natural eye movement patterns.
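
The pipeline described in the abstract reduces to a sequence classification problem: for every frame, gaze is related to the objects detected in the scene, and a bidirectional LSTM maps the resulting feature sequence to an intention label (manipulate vs. inspect). The snippet below is a minimal sketch of such a classifier in PyTorch, not the authors' released code; the feature count, window length, and class layout are illustrative assumptions.

```python
# Sketch of a bidirectional LSTM intention classifier over per-frame gaze
# features (e.g. gaze-to-object distances, fixation flags). Dimensions are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class GazeIntentLSTM(nn.Module):
    def __init__(self, n_features=8, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=n_features,
            hidden_size=hidden_size,
            num_layers=2,
            batch_first=True,
            bidirectional=True,
        )
        # Forward and backward directions are concatenated: 2 * hidden_size.
        self.head = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) sequences of gaze-object features.
        out, _ = self.lstm(x)
        # Classify from the representation at the final time step.
        return self.head(out[:, -1, :])

if __name__ == "__main__":
    model = GazeIntentLSTM()
    dummy = torch.randn(4, 120, 8)   # 4 windows, 120 frames, 8 features each
    logits = model(dummy)            # (4, 2): manipulate vs. inspect scores
    print(logits.shape)
```

In practice, each frame's feature vector could carry quantities such as the distance from the gaze point to each candidate object and simple fixation statistics; the 91.9% accuracy reported in the paper comes from the authors' full feature set and training setup, not from this sketch.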

Related research

09/17/2019 - What Are You Looking at? Detecting Human Intention in Gaze based Human-Robot Interaction
In gaze based Human-Robot Interaction (HRI), it is important to determin...

05/26/2023 - Revealing the Hidden Effects of Phishing Emails: An Analysis of Eye and Mouse Movements in Email Sorting Tasks
Users are the last line of defense as phishing emails pass filter mechan...

03/04/2021 - Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform
We have pioneered the Where-You-Look-Is Where-You-Go approach to control...

01/14/2021 - Ensemble of LSTMs and feature selection for human action prediction
As robots are becoming more and more ubiquitous in human environments, i...

06/18/2020 - Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities
The study of gaze behavior has primarily been constrained to controlled ...

08/18/2022 - Intention estimation from gaze and motion features for human-robot shared-control object manipulation
Shared control can help in teleoperated object manipulation by assisting...

02/17/2023 - Build a training interface to install the bat's echolocation skills in humans
Bats use a sophisticated ultrasonic sensing method called echolocation t...
