What Can I Do Here? Learning New Skills by Imagining Visual Affordances

06/01/2021
by Alexander Khazatsky et al.

A generalist robot equipped with learned skills must be able to perform many tasks in many different environments. However, zero-shot generalization to new settings is not always possible. When the robot encounters a new environment or object, it may need to fine-tune some of its previously learned skills to accommodate the change. Crucially, however, previously learned behaviors and models should still be able to accelerate this relearning. In this paper, we study how generative models of possible outcomes can allow a robot to learn visual representations of affordances, so that it can sample potential outcomes in new situations and then further train its policy to achieve them. In effect, prior data is used to learn what kinds of outcomes may be possible, so that when the robot encounters an unfamiliar setting, it can sample potential outcomes from its model, attempt to reach them, and thereby update both its skills and its outcome model. This approach, visuomotor affordance learning (VAL), can be used to train goal-conditioned policies that operate on raw image inputs and can rapidly learn to manipulate new objects via our proposed affordance-directed exploration scheme. We show that VAL can utilize prior data to solve real-world tasks such as drawer opening, grasping, and placing objects in new scenes with only five minutes of online experience in the new scene.
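
The abstract describes a concrete loop: imagine an outcome with a generative affordance model, attempt to reach it with a goal-conditioned policy, and update both components from the resulting experience. The Python sketch below illustrates that loop only; the class names (AffordanceModel, GoalConditionedPolicy), the Gym-style env interface, and all placeholder bodies are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the sample-attempt-update loop from the abstract.
# All names below (AffordanceModel, GoalConditionedPolicy, the Gym-style env)
# are hypothetical placeholders, not the authors' actual code.
import numpy as np


class AffordanceModel:
    """Generative model over outcome images that may be reachable from the current image."""

    def __init__(self, latent_dim=8):
        self.latent_dim = latent_dim

    def sample_outcome(self, current_image):
        # Placeholder: perturb the current image with a random latent to "imagine" an outcome.
        z = np.random.randn(self.latent_dim)
        return current_image + 0.1 * np.resize(z, current_image.shape)

    def update(self, reached_outcomes):
        # Placeholder: refit the generative model on outcomes the robot actually reached.
        pass


class GoalConditionedPolicy:
    """Policy pi(a | image, goal_image) that tries to reach an imagined outcome."""

    def act(self, current_image, goal_image):
        return np.random.uniform(-1.0, 1.0, size=4)  # placeholder action

    def update(self, trajectory, goal_image):
        # Placeholder: e.g., off-policy RL with goal relabeling on the collected trajectory.
        pass


def affordance_directed_exploration(env, affordances, policy, episodes=10, horizon=50):
    """In a new scene: imagine an outcome, attempt to reach it, update both models."""
    for _ in range(episodes):
        obs = env.reset()
        goal = affordances.sample_outcome(obs)    # imagine a possible outcome
        trajectory = []
        for _ in range(horizon):
            action = policy.act(obs, goal)        # act toward the imagined outcome
            obs, reward, done, info = env.step(action)
            trajectory.append((obs, action))
            if done:
                break
        policy.update(trajectory, goal)           # improve the skill
        affordances.update([obs])                 # refine the model of possible outcomes
```

In the setting described above, both components would be pretrained on prior data, so that only a few minutes of online experience in the new scene are needed to adapt them.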

Related research

Contextual Imagined Goals for Self-Supervised Robotic Learning (10/23/2019)
While reinforcement learning provides an appealing formalism for learnin...

Learning Arbitrary-Goal Fabric Folding with One Hour of Real Robot Experience (10/07/2020)
Manipulating deformable objects, such as fabric, is a long standing prob...

Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills (04/15/2021)
We consider the problem of learning useful robotic skills from previousl...

Learning Deep Parameterized Skills from Demonstration for Re-targetable Visuomotor Control (10/23/2019)
Robots need to learn skills that can not only generalize across similar ...

Active choice of teachers, learning strategies and goals for a socially guided intrinsic motivation learner (04/18/2018)
We present an active learning architecture that allows a robot to active...

Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment (04/12/2021)
Reinforcement learning from large-scale offline datasets provides us wit...

One-Shot Object Localization Using Learnt Visual Cues via Siamese Networks (12/26/2020)
A robot that can operate in novel and unstructured environments must be ...
