Bayesian Imitation Learning for End-to-End Mobile Manipulation

02/15/2022
by Yuqing Du, et al.

In this work we investigate and demonstrate the benefits of a Bayesian approach to imitation learning from multiple sensor inputs, applied to the task of opening office doors with a mobile manipulator. Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities, especially for tasks that may favor different sensors in different situations. As we scale multi-sensor robotic learning to unstructured real-world settings (e.g. offices, homes) and more complex robot behaviors, we also increase reliance on simulators for cost, efficiency, and safety. Consequently, the sim-to-real gap across multiple sensor modalities also increases, making simulated validation more difficult. We show that using the Variational Information Bottleneck (Alemi et al., 2016) to regularize convolutional neural networks improves generalization to held-out domains and reduces the sim-to-real gap in a sensor-agnostic manner. As a side effect, the learned embeddings also provide useful estimates of model uncertainty for each sensor. We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities based on understanding of the situational uncertainty of each sensor. In a real-world office environment, we achieve a 96% task success rate, a +16% improvement over the baseline.
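To make the Variational Information Bottleneck idea concrete: a VIB-regularized encoder replaces a deterministic embedding with a Gaussian posterior q(z|x) sampled via the reparameterization trick, and adds the KL divergence to a unit-Gaussian prior to the training loss. The sketch below is an illustrative NumPy toy, not the paper's implementation; all names, shapes, and the β weight are assumptions.

```python
import numpy as np

def vib_embed(features, w_mu, w_logvar, rng):
    """Map encoder features to a stochastic embedding z ~ N(mu, sigma^2).

    Returns the sampled embedding (reparameterization trick) and the
    closed-form KL divergence KL(q(z|x) || N(0, I)) used as a regularizer.
    """
    mu = features @ w_mu                  # posterior mean
    logvar = features @ w_logvar          # posterior log-variance
    std = np.exp(0.5 * logvar)
    z = mu + std * rng.standard_normal(mu.shape)  # differentiable sample
    # KL between a diagonal Gaussian and a standard-normal prior.
    kl = 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=-1)
    return z, kl

# Toy usage: the training objective becomes task_loss + beta * kl.mean(),
# where beta controls how aggressively the embedding is compressed.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16))           # batch of encoder features
w_mu = rng.standard_normal((16, 8)) * 0.1
w_logvar = rng.standard_normal((16, 8)) * 0.1
z, kl = vib_embed(feats, w_mu, w_logvar, rng)
print(z.shape, kl.shape)  # (4, 8) (4,)
```

The KL term is what makes the embedding uncertainty-aware: inputs the encoder cannot compress well end up with wider posteriors, which is the signal the abstract refers to as a per-sensor uncertainty estimate.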

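One natural way to use such per-sensor uncertainty for fusion is inverse-variance weighting: each modality's prediction contributes in proportion to how confident its embedding is. The following is a hypothetical sketch of that rule; the function names and the specific fusion mechanism are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fuse_predictions(preds, variances, eps=1e-8):
    """Inverse-variance weighting of per-sensor action predictions.

    preds:     dict mapping sensor name -> predicted action vector
    variances: dict mapping sensor name -> scalar uncertainty estimate
    A sensor whose embedding is highly uncertain (large variance)
    contributes proportionally less to the fused action.
    """
    weights = {s: 1.0 / (v + eps) for s, v in variances.items()}
    total = sum(weights.values())
    return sum(weights[s] * preds[s] for s in preds) / total

# Toy usage: depth is uncertain (e.g. a reflective glass door),
# so the fused action stays close to the RGB prediction.
preds = {"rgb": np.array([1.0, 0.0]), "depth": np.array([0.0, 1.0])}
variances = {"rgb": 0.1, "depth": 10.0}
action = fuse_predictions(preds, variances)
print(action)  # dominated by the RGB prediction
```

This captures the situational behavior the abstract describes: the same policy can lean on depth when RGB is unreliable (e.g. glare) and on RGB when depth fails (e.g. transparent or reflective surfaces).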

