SLOT-V: Supervised Learning of Observer Models for Legible Robot Motion Planning in Manipulation

10/04/2022
by Sebastian Wallkotter, et al.

We present SLOT-V, a novel supervised learning framework that learns observer models (human preferences) from robot motion trajectories in a legibility context. Legibility measures how easily a (human) observer can infer the robot's goal from a robot motion trajectory. When generating such trajectories, existing planners often rely on an observer model that estimates the quality of trajectory candidates. These observer models are frequently hand-crafted or, occasionally, learned from demonstrations. Here, we propose to learn them in a supervised manner, using the same data format that is frequently used during the evaluation of the aforementioned approaches. We then demonstrate the generality of SLOT-V using a Franka Emika robot in a simulated manipulation environment. For this, we show that it can learn to closely predict various hand-crafted observer models, i.e., that SLOT-V's hypothesis space encompasses existing hand-crafted models. Next, we showcase SLOT-V's ability to generalize by showing that a trained model continues to perform well in environments with unseen goal configurations and/or goal counts. Finally, we benchmark SLOT-V's sample efficiency (and performance) against an existing IRL approach and show that SLOT-V learns better observer models with less data. Combined, these results suggest that SLOT-V can learn viable observer models. Better observer models imply more legible trajectories, which may, in turn, lead to better and more transparent human-robot interaction.
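To make the notion of a hand-crafted observer model concrete, the following is a minimal sketch (not taken from the paper) of one common cost-based formulation: a goal is judged probable if the motion so far, plus the cheapest remaining path to that goal, is nearly as cheap as the direct path. All function names and the Euclidean cost are illustrative assumptions; SLOT-V's contribution is to learn such a model from data rather than fix it by hand.

```python
import numpy as np

def path_cost(a, b):
    # Illustrative hand-crafted cost: straight-line (Euclidean) distance.
    return np.linalg.norm(np.asarray(b) - np.asarray(a))

def observer_goal_probabilities(start, current, goals):
    """Cost-based observer model (hypothetical sketch): score each candidate
    goal by exp(-(cost so far + cost to go)) / exp(-(optimal direct cost)),
    then normalize the scores into a probability distribution."""
    progress = path_cost(start, current)
    scores = []
    for g in goals:
        remaining = path_cost(current, g)
        optimal = path_cost(start, g)
        scores.append(np.exp(-(progress + remaining)) / np.exp(-optimal))
    scores = np.array(scores)
    return scores / scores.sum()

# A partial trajectory that veers toward one goal should make that
# goal more probable under the observer model.
start = [0.0, 0.0]
goals = [[1.0, 1.0], [1.0, -1.0]]
p = observer_goal_probabilities(start, [0.5, 0.6], goals)
```

A legibility-aware planner would then rank trajectory candidates by how quickly such a model concentrates probability on the true goal; SLOT-V replaces the fixed `path_cost` assumption with a model fit to observer data.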

