Self-Supervised Sim-to-Real Adaptation for Visual Robotic Manipulation

10/21/2019
by Rae Jeong, et al.

Collecting and automatically obtaining reward signals from real robotic visual data for the purposes of training reinforcement learning algorithms can be quite challenging and time-consuming. Methods that utilize unlabeled data therefore have great potential to further accelerate robotic learning. We consider here the problem of performing manipulation tasks from pixels. In such tasks, choosing an appropriate state representation is crucial for planning and control. This is even more relevant with real images, where noise, occlusions, and resolution affect the accuracy and reliability of state estimation. In this work, we learn a latent state representation implicitly with deep reinforcement learning in simulation, and then adapt it to the real domain using unlabeled real robot data. We propose to do so by optimizing sequence-based self-supervised objectives. These exploit the temporal nature of robot experience and are common to both the simulated and real domains, without assuming any alignment of underlying states in simulated and unlabeled real images. We propose the Contrastive Forward Dynamics loss, which combines dynamics model learning with time-contrastive techniques. The learned state representation that results from our method can be used to robustly solve a manipulation task in simulation and to successfully transfer the learned skill to a real system. We demonstrate the effectiveness of our approach by training a vision-based reinforcement learning agent for cube stacking. Agents trained with our method, using only 5 hours of unlabeled real robot data for adaptation, show a clear improvement over domain randomization and standard visual domain adaptation techniques for sim-to-real transfer.
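The core idea of a contrastive forward dynamics objective can be sketched roughly as follows: encode observations into latent states, predict the next latent state from the current latent and action via a forward dynamics model, and score the prediction against all next-state latents in the batch with an InfoNCE-style contrastive loss, where the true next latent is the positive. The snippet below is a minimal illustrative sketch in NumPy, not the paper's implementation; the linear dynamics model (`W_dyn`, `W_act`) and all variable names are hypothetical simplifications.

```python
import numpy as np

def contrastive_forward_dynamics_loss(z_t, actions, z_next, W_dyn, W_act):
    """InfoNCE-style contrastive forward dynamics loss (illustrative sketch).

    z_t:     (B, D) latent states at time t
    actions: (B, A) actions taken at time t
    z_next:  (B, D) latent states at time t+1 (positives, one per row)
    W_dyn, W_act: parameters of a hypothetical *linear* dynamics model.
    """
    # Predict the next latent state from the current latent and action.
    z_hat = z_t @ W_dyn + actions @ W_act            # (B, D)
    # Similarity of each prediction against every next latent in the batch.
    logits = z_hat @ z_next.T                        # (B, B)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive pair for prediction i is the true next latent i (diagonal);
    # all other rows in the batch act as negatives.
    return -np.mean(np.diag(log_probs))
```

Because the loss only needs observation-action sequences, it can be computed on unlabeled real robot data exactly as on simulated data, which is what makes it suitable as a sim-to-real adaptation signal.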

research
02/12/2022

End-to-end Reinforcement Learning of Robotic Manipulation with Robust Keypoints Representation

We present an end-to-end Reinforcement Learning(RL) framework for roboti...
research
03/20/2021

Unsupervised Feature Learning for Manipulation with Contrastive Domain Randomization

Robotic tasks such as manipulation with visual inputs require image feat...
research
06/20/2023

RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation

The ability to leverage heterogeneous robotic experience from different ...
research
12/03/2018

Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control

Deep reinforcement learning (RL) algorithms can learn complex robotic sk...
research
02/04/2022

Malleable Agents for Re-Configurable Robotic Manipulators

Re-configurable robots potentially have more utility and flexibility for...
research
03/11/2022

Graph Neural Networks for Relational Inductive Bias in Vision-based Deep Reinforcement Learning of Robot Control

State-of-the-art reinforcement learning algorithms predominantly learn a...
research
05/10/2023

Inclusive FinTech Lending via Contrastive Learning and Domain Adaptation

FinTech lending (e.g., micro-lending) has played a significant role in f...
