Multiperspective Teaching of Unknown Objects via Shared-gaze-based Multimodal Human-Robot Interaction

by Daniel Weber, et al.

For robots to be deployed successfully in multifaceted situations, an understanding of their environment is indispensable. As the performance of state-of-the-art object detectors advances, so does the capability of robots to detect objects within their interaction domain. However, such detectors bind the robot to a fixed set of trained classes and prevent it from adapting to unfamiliar surroundings beyond predefined scenarios. In such settings, humans can act as teachers, guiding the robot through the overwhelming number of interaction entities and imparting the requisite expertise. We propose a novel pipeline that harnesses human gaze and augmented reality in a human-robot collaboration context to teach a robot novel objects in its surrounding environment. By intertwining gaze (to guide the robot's attention to an object of interest) with augmented reality (to convey the respective class information), we enable the robot to quickly acquire a significant amount of automatically labeled training data on its own. Training in a transfer learning fashion, we demonstrate the robot's capability to detect recently learned objects and evaluate the influence of different machine learning models and learning procedures, as well as the amount of training data involved. Our multimodal approach proves to be an efficient and natural way to teach the robot novel objects from a few instances and allows it to detect classes for which no training dataset is available. In addition, we make our dataset, consisting of RGB and depth data, intrinsic and extrinsic camera parameters, and regions of interest, publicly available to the research community.


ARDIE: AR, Dialogue, and Eye Gaze Policies for Human-Robot Collaboration

Human-robot collaboration (HRC) has become increasingly relevant in indu...

SENSAR: A Visual Tool for Intelligent Robots for Collaborative Human-Robot Interaction

Establishing common ground between an intelligent robot and a human requ...

iCub Detecting Gazed Objects: A Pipeline Estimating Human Attention

This paper explores the role of eye gaze in human-robot interactions and...

Gaze-based Object Detection in the Wild

In human-robot collaboration, one challenging task is to teach a robot n...

Learning Topometric Semantic Maps from Occupancy Grids

Today's mobile robots are expected to operate in complex environments th...

BOSS: A Benchmark for Human Belief Prediction in Object-context Scenarios

Humans with an average level of social cognition can infer the beliefs o...

A Quality Diversity Approach to Automatically Generating Human-Robot Interaction Scenarios in Shared Autonomy

The growth of scale and complexity of interactions between humans and ro...
