Intention estimation from gaze and motion features for human-robot shared-control object manipulation

by Anna Belardinelli et al.

Shared control can assist teleoperated object manipulation by supporting the execution of the user's intention. This requires robust and prompt intention estimation, which relies on behavioral observations. Here, an intention estimation framework is presented that uses natural gaze and motion features to predict the current action and the target object. The system is trained and tested in a simulated environment on pick-and-place sequences produced in a relatively cluttered scene with both hands, including possible handovers between hands. Validation across different users and hands achieves good accuracy and earliness of prediction. An analysis of the predictive power of individual features shows that the grasping trigger and the gaze features dominate early identification of the current action. In the current framework, the same probabilistic model serves both hands working in parallel and independently, while a rule-based model is proposed to identify the resulting bimanual action. Finally, limitations of this approach and perspectives toward more complex, fully bimanual manipulations are discussed.
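To make the setup concrete, the following is a minimal sketch of how a per-hand probabilistic intention estimator and a rule-based bimanual combiner could be structured. The naive-Bayes-style update over discrete gaze/motion observations, the observation names, and the combination rules are illustrative assumptions, not the authors' actual model or feature set.

```python
class IntentionEstimator:
    """Toy per-hand Bayesian filter over (action, target) intentions.

    Illustrative only: assumes discretized gaze/motion observations
    (e.g. "gaze_cup", "grasp_trigger") with known likelihood tables.
    """

    def __init__(self, intentions, likelihoods):
        # intentions: list of (action, target) pairs
        # likelihoods: {intention: {observation: P(obs | intention)}}
        self.likelihoods = likelihoods
        n = len(intentions)
        self.belief = {i: 1.0 / n for i in intentions}  # uniform prior

    def update(self, observation):
        # Bayes update: posterior ∝ prior * P(observation | intention),
        # with a small floor for unseen observations.
        for i in self.belief:
            self.belief[i] *= self.likelihoods[i].get(observation, 1e-6)
        z = sum(self.belief.values())
        for i in self.belief:
            self.belief[i] /= z
        # Return the current maximum-a-posteriori intention.
        return max(self.belief, key=self.belief.get)


def combine_bimanual(left, right):
    """Rule-based combination of two per-hand (action, target) predictions.

    The rules below are hypothetical examples of such a combiner.
    """
    if left[0] == "place" and right[0] == "pick" and left[1] == right[1]:
        return "handover"
    if left[1] == right[1]:
        return "joint-" + left[0]
    return f"independent: left {left[0]} {left[1]}, right {right[0]} {right[1]}"
```

In this sketch, each hand runs its own `IntentionEstimator` instance over the same intention space, and `combine_bimanual` is applied to the two per-hand predictions at each step, mirroring the parallel-and-independent use of one probabilistic model described above.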




Gaze-based intention estimation: principles, methodologies, and applications in HRI

Intention prediction has become a relevant field of research in Human-Ma...

Early Estimation of User's Intention of Tele-Operation Using Object Affordance and Hand Motion in a Dual First-Person Vision

This paper describes a method of estimating the intention of a user's mo...

What Are You Looking at? Detecting Human Intention in Gaze based Human-Robot Interaction

In gaze based Human-Robot Interaction (HRI), it is important to determin...

MIDAS: Deep learning human action intention prediction from natural eye movement patterns

Eye movements have long been studied as a window into the attentional me...

Fast human motion prediction for human-robot collaboration with wearable interfaces

In this paper, we aim at improving human motion prediction during human-...

Towards Intention Prediction for Handheld Robots: a Case of Simulated Block Copying

Within this work, we explore intention inference for user actions in the...

Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping

Assistive robotic systems endeavour to support those with movement disab...
