Reinforcement Learning from Multiple Sensors via Joint Representations

02/10/2023
by Philipp Becker, et al.

In many scenarios, observations from more than one sensor modality are available for reinforcement learning (RL). For example, many agents can perceive their internal state via proprioceptive sensors but must infer the environment's state from high-dimensional observations such as images. For image-based RL, a variety of self-supervised representation learning approaches exist to improve performance and sample complexity. However, these approaches learn the image representation in isolation, even though proprioception can help a representation learning algorithm focus on relevant aspects and guide it toward better representations. Hence, in this work, we propose using Recurrent State Space Models to fuse all available sensory information into a single consistent representation. We combine reconstruction-based and contrastive approaches for training, which allows us to use the most appropriate method for each sensor modality, for example, reconstruction for proprioception and a contrastive loss for images. We demonstrate the benefits of utilizing proprioception for learning representations for RL across a large set of experiments. Furthermore, we show that our joint representations significantly improve performance compared to a post hoc combination of image representations and proprioception.
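
The following is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: a recurrent state-space model whose posterior fuses image and proprioceptive features into one latent state, trained with a reconstruction loss for proprioception and an InfoNCE-style contrastive loss for images. All module names and sizes, the single-layer encoders, and the use of pre-extracted image feature vectors instead of a CNN over pixels are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence


class JointRSSM(nn.Module):
    """Sketch of a recurrent state-space model fusing images and proprioception."""

    def __init__(self, img_dim=128, prop_dim=12, act_dim=4, state_dim=64, hidden=128):
        super().__init__()
        self.state_dim, self.hidden = state_dim, hidden
        # Stand-in encoders; a real image encoder would be a CNN over pixels.
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ELU(), nn.Linear(hidden, hidden))
        self.prop_enc = nn.Sequential(nn.Linear(prop_dim, hidden), nn.ELU())
        self.rnn = nn.GRUCell(state_dim + act_dim, hidden)        # deterministic path
        self.prior = nn.Linear(hidden, 2 * state_dim)             # p(z_t | h_t)
        self.posterior = nn.Linear(3 * hidden, 2 * state_dim)     # q(z_t | h_t, image, proprio)
        self.prop_dec = nn.Linear(state_dim + hidden, prop_dim)   # reconstruction head
        self.img_proj = nn.Linear(state_dim + hidden, hidden)     # contrastive head

    def step(self, h, z, act, img_feat, prop_feat):
        h = self.rnn(torch.cat([z, act], -1), h)
        p_mu, p_logstd = self.prior(h).chunk(2, -1)
        q_mu, q_logstd = self.posterior(torch.cat([h, img_feat, prop_feat], -1)).chunk(2, -1)
        prior, post = Normal(p_mu, p_logstd.exp()), Normal(q_mu, q_logstd.exp())
        z = post.rsample()                                        # reparameterized sample
        kl = kl_divergence(post, prior).sum(-1).mean()
        return h, z, kl


def joint_losses(model, images, proprio, actions):
    """images: (T, B, img_dim), proprio: (T, B, prop_dim), actions: (T, B, act_dim)."""
    T, B = images.shape[:2]
    h = images.new_zeros(B, model.hidden)
    z = images.new_zeros(B, model.state_dim)
    recon, nce, kl_total = 0.0, 0.0, 0.0
    targets = torch.arange(B, device=images.device)
    for t in range(T):
        img_feat, prop_feat = model.img_enc(images[t]), model.prop_enc(proprio[t])
        h, z, kl = model.step(h, z, actions[t], img_feat, prop_feat)
        feat = torch.cat([z, h], -1)
        # Reconstruction loss for the low-dimensional proprioceptive sensor.
        recon = recon + F.mse_loss(model.prop_dec(feat), proprio[t])
        # InfoNCE-style loss for images: the state at time t should score highest
        # against its own image embedding; other batch elements act as negatives.
        logits = model.img_proj(feat) @ img_feat.t()
        nce = nce + F.cross_entropy(logits, targets)
        kl_total = kl_total + kl
    return (recon + nce + kl_total) / T


# Usage with random stand-in data (T steps, batch size B).
model = JointRSSM()
T, B = 10, 16
loss = joint_losses(model,
                    torch.randn(T, B, 128),   # image features
                    torch.randn(T, B, 12),    # proprioceptive readings
                    torch.randn(T, B, 4))     # actions
loss.backward()
```

Because both loss terms are computed from the same latent state, the proprioceptive and image information end up in a single consistent representation, which is the joint-representation property the abstract describes.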

Related research

01/28/2022 · Mask-based Latent Reconstruction for Reinforcement Learning
For deep reinforcement learning (RL) from pixels, learning effective sta...

10/11/2021 · Learning Temporally-Consistent Representations for Data-Efficient Reinforcement Learning
Deep reinforcement learning (RL) agents that exist in high-dimensional s...

04/25/2022 · Task-Induced Representation Learning
In this work, we evaluate the effectiveness of representation learning a...

06/03/2023 · MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning
Recent approaches have utilized self-supervised auxiliary tasks as repre...

09/14/2021 · Comparing Reconstruction- and Contrastive-based Models for Visual Task Planning
Learning state representations enables robotic planning directly from ra...

11/03/2020 · Representation Matters: Improving Perception and Exploration for Robotics
Projecting high-dimensional environment observations into lower-dimensio...

02/10/2021 · Improving Model-Based Reinforcement Learning with Internal State Representations through Self-Supervision
Using a model of the environment, reinforcement learning agents can plan...
