H2O: A Benchmark for Visual Human-human Object Handover Analysis

04/23/2021
by Ruolin Ye, et al.

Object handover is a common human collaboration behavior that attracts attention from researchers in Robotics and Cognitive Science. Though visual perception plays an important role in the object handover task, the whole handover process has rarely been explored from a visual perspective. In this work, we propose H2O, a novel, richly annotated dataset for visual analysis of human-human object handovers. H2O, which contains 18K video clips involving 15 people handing over 30 objects to each other, is a multi-purpose benchmark. It can support several vision-based tasks, among which we specifically provide a baseline method, RGPNet, for a less-explored task named Receiver Grasp Prediction. Extensive experiments show that RGPNet can produce plausible grasps based on the giver's hand-object states in the pre-handover phase. In addition, we report hand and object pose errors with existing baselines and show that the dataset can serve as video demonstrations for robot imitation learning on the handover task. The dataset, model, and code will be made public.
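To make the Receiver Grasp Prediction task concrete, below is a minimal sketch of a baseline in the spirit described above: a network that takes the giver's pre-handover hand pose and an object point cloud and regresses receiver grasp parameters. All names, tensor shapes, and the MANO-style grasp parameterization are illustrative assumptions, not the released H2O/RGPNet API.

```python
# Hypothetical Receiver Grasp Prediction sketch. Field names, dimensions, and
# the grasp parameterization are assumptions for illustration only.
import torch
import torch.nn as nn


class ReceiverGraspPredictor(nn.Module):
    """Maps the giver's pre-handover hand-object state to a receiver grasp."""

    def __init__(self, hand_dim=63, grasp_dim=48, hidden=256):
        super().__init__()
        # Encode the giver's hand pose (e.g., 21 hand joints x 3D, flattened).
        self.hand_enc = nn.Sequential(nn.Linear(hand_dim, hidden), nn.ReLU())
        # Encode the object point cloud with a shared per-point MLP + max pooling.
        self.obj_enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, hidden))
        # Regress receiver grasp parameters (e.g., a MANO-style axis-angle pose vector).
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, grasp_dim))

    def forward(self, giver_hand, obj_pts):
        h = self.hand_enc(giver_hand)                # (B, hidden)
        o = self.obj_enc(obj_pts).max(dim=1).values  # pool over points -> (B, hidden)
        return self.head(torch.cat([h, o], dim=-1))  # (B, grasp_dim)


if __name__ == "__main__":
    model = ReceiverGraspPredictor()
    giver_hand = torch.randn(2, 63)       # batch of flattened giver hand joints
    obj_pts = torch.randn(2, 1024, 3)     # batch of object point clouds
    print(model(giver_hand, obj_pts).shape)  # torch.Size([2, 48])
```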
