Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration

11/26/2019
by Simon Stepputtis, et al.

In this work, we propose a novel end-to-end imitation learning approach that combines natural language, vision, and motion information to produce an abstract representation of a task, which is in turn used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end user to direct a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate across a variety of conditions while remaining amenable to probabilistic interpretability.
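
To make the described architecture concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released code; the module layout, layer sizes, and the 7-dimensional robot state are illustrative assumptions): a recurrent language encoder and a small convolutional vision encoder are fused into an abstract task embedding, which then conditions a low-level controller mapping the current robot state to the next motor command.

```python
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    """Sketch of a language- and vision-conditioned imitation policy."""

    def __init__(self, vocab_size=1000, embed_dim=64, task_dim=128, state_dim=7):
        super().__init__()
        # Language branch: token embeddings summarized by a GRU.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lang_rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        # Vision branch: a small CNN over the scene image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion into an abstract task representation.
        self.task_head = nn.Sequential(
            nn.Linear(embed_dim + 32, task_dim), nn.ReLU(),
        )
        # Controller: task embedding + robot state -> next motor command.
        self.controller = nn.Sequential(
            nn.Linear(task_dim + state_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, tokens, image, state):
        _, h = self.lang_rnn(self.word_embed(tokens))     # h: (1, B, embed_dim)
        lang = h.squeeze(0)                               # (B, embed_dim)
        vis = self.cnn(image)                             # (B, 32)
        task = self.task_head(torch.cat([lang, vis], dim=-1))
        return self.controller(torch.cat([task, state], dim=-1))

policy = MultimodalPolicy()
tokens = torch.randint(0, 1000, (1, 6))   # tokenized verbal instruction
image = torch.randn(1, 3, 64, 64)         # RGB scene observation
state = torch.randn(1, 7)                 # current joint configuration
action = policy(tokens, image, state)     # next command, shape (1, 7)
```

The design choice worth noting is that the controller is conditioned on the task embedding at run-time rather than being trained per task, which is what would let a single policy follow new verbal instructions in new scenes.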

Related research

02/04/2022 · BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning
In this paper, we study the problem of enabling a vision-based robotic m...

03/05/2020 · A Geometric Perspective on Visual Imitation Learning
We consider the problem of visual imitation learning without human super...

10/22/2020 · Language-Conditioned Imitation Learning for Robot Manipulation Tasks
Imitation learning is a popular approach for teaching motor skills to ro...

06/28/2018 · End-to-End Deep Imitation Learning: Robot Soccer Case Study
In imitation learning, behavior learning is generally done using the fea...

11/21/2018 · Early Fusion for Goal Directed Robotic Vision
Increasingly, perceptual systems are being codified as strict pipelines ...

11/13/2019 · Motion Reasoning for Goal-Based Imitation Learning
We address goal-based imitation learning, where the aim is to output the...

07/07/2023 · Decomposing the Generalization Gap in Imitation Learning for Visual Robotic Manipulation
What makes generalization hard for imitation learning in visual robotic ...
