Unsupervised state representation learning with robotic priors: a robustness benchmark

09/15/2017
by Timothée Lesort, et al.

Our understanding of the world depends heavily on our capacity to produce intuitive and simplified representations that can easily be used to solve problems. We reproduce this simplification process using a neural network that builds a low-dimensional state representation of the world from images acquired by a robot. As in Jonschkowski et al. 2015, we learn in an unsupervised way, using prior knowledge about the world encoded as loss functions called robotic priors, and we extend this approach to richer, higher-dimensional images in order to learn a 3D representation of a robot's hand position from RGB images. We propose a quantitative evaluation of the learned representation, based on nearest neighbors in the state space, that allows us to assess its quality, and we show both the potential and the limitations of robotic priors in realistic environments. We increase the image size and add distractors and domain randomization, all of which are crucial for transferring the learned representation to real robots. Finally, we also contribute a new prior to improve the robustness of the representation. The applications of such a low-dimensional state representation range from easing reinforcement learning (RL) and knowledge transfer across tasks to facilitating learning from raw data with more efficient and compact high-level representations. The results show that the robotic prior approach is able to extract a high-level representation, such as the 3D position of an arm, and organize it into a compact and coherent state space on a challenging dataset.
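The robotic priors referred to above are the loss functions of Jonschkowski et al. 2015 (temporal coherence, proportionality, causality and repeatability). The PyTorch sketch below shows how such losses can be computed on a batch of encoded transitions; the pairing of transitions by shifting the batch and the equal weighting of the four terms are illustrative assumptions, not the exact setup of the paper.

```python
import torch


def robotic_prior_losses(s_t, s_next, actions, rewards):
    """Sketch of the four robotic priors of Jonschkowski et al. 2015.

    s_t, s_next : (B, d) encoded states of consecutive observations
    actions, rewards : (B,) action labels and rewards of the transitions
    Transition pairs are formed by shifting the batch by one position
    (an illustrative assumption; the paper samples pairs explicitly).
    """
    ds = s_next - s_t  # state change of each transition

    # Temporal coherence: states should change slowly over time.
    temp_coherence = ds.pow(2).sum(dim=1).mean()

    # Pair every transition with another one from the same batch.
    ds2 = ds.roll(1, 0)
    s2 = s_t.roll(1, 0)
    same_action = (actions == actions.roll(1, 0)).float()
    diff_reward = (rewards != rewards.roll(1, 0)).float()
    dist = (s_t - s2).pow(2).sum(dim=1)

    # Proportionality: the same action should produce state changes
    # of similar magnitude.
    proportionality = (same_action *
                       (ds.norm(dim=1) - ds2.norm(dim=1)).pow(2)).mean()

    # Causality: the same action leading to different rewards should
    # come from states that are far apart.
    causality = (same_action * diff_reward * torch.exp(-dist)).mean()

    # Repeatability: the same action in nearby states should produce
    # similar state changes.
    repeatability = (same_action * torch.exp(-dist) *
                     (ds - ds2).pow(2).sum(dim=1)).mean()

    return temp_coherence + proportionality + causality + repeatability
```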

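The quantitative evaluation mentioned in the abstract relies on nearest neighbors in the learned state space. One minimal way to instantiate such a check, sketched below with scikit-learn, is to measure how far the neighbors of each point in the learned space are from that point in the ground-truth space (here the 3D hand position); the choice of k and of a mean squared distance are assumptions for illustration rather than the paper's exact metric.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_quality(learned_states, ground_truth, k=5):
    """Mean squared distance, in ground-truth space (e.g. the 3D hand
    position), between each sample and its k nearest neighbors found
    in the learned state space. Lower values indicate a more coherent
    representation."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(learned_states)
    _, idx = nn.kneighbors(learned_states)   # idx[:, 0] is the sample itself
    neighbors = ground_truth[idx[:, 1:]]     # shape (N, k, 3)
    diffs = neighbors - ground_truth[:, None, :]
    return float(np.mean(np.sum(diffs ** 2, axis=-1)))
```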

Related research

Low Dimensional State Representation Learning with Reward-shaped Priors (07/29/2020)
Reinforcement Learning has been able to solve many complicated robotics ...

S-RL Toolbox: Environments, Datasets and Evaluation Metrics for State Representation Learning (09/25/2018)
State representation learning aims at learning compact representations f...

The Consciousness Prior (09/25/2017)
A new prior is proposed for representation learning, which can be combin...

Zero-shot Sim-to-Real Transfer with Modular Priors (09/20/2018)
Current end-to-end Reinforcement Learning (RL) approaches are severely l...

Efficient State Representation Learning for Dynamic Robotic Scenarios (09/17/2021)
While the rapid progress of deep learning fuels end-to-end reinforcement...

Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces (07/04/2021)
Autonomous robots require high degrees of cognitive and motoric intellig...

PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations (05/27/2017)
We propose position-velocity encoders (PVEs) which learn---without super...
