Latent Exploration for Reinforcement Learning

by Alberto Silvio Chiappa, et al.

In Reinforcement Learning, agents learn policies by exploring and interacting with the environment. Due to the curse of dimensionality, learning policies that map high-dimensional sensory input to motor output is particularly challenging. During training, state-of-the-art methods (SAC, PPO, etc.) explore the environment by perturbing the actuation with independent Gaussian noise. While this unstructured exploration has proven successful in numerous tasks, it can be suboptimal for overactuated systems. When multiple actuators, such as motors or muscles, drive behavior, uncorrelated perturbations risk diminishing each other's effect, or modifying the behavior in a task-irrelevant way. While solutions to introduce time correlation across action perturbations exist, introducing correlation across actuators has been largely ignored. Here, we propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network, which can be seamlessly integrated with on- and off-policy algorithms. We demonstrate that the noisy actions generated by perturbing the network's activations can be modeled as a multivariate Gaussian distribution with a full covariance matrix. In the PyBullet locomotion tasks, Lattice-SAC achieves state-of-the-art results, and reaches 18% higher reward in the Humanoid environment. In the musculoskeletal control environments of MyoSuite, Lattice-PPO achieves higher reward in most reaching and object manipulation tasks, while also finding more energy-efficient policies with reductions of 20-60%, demonstrating the value of structured exploration in time and actuator space for complex motor control tasks.
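The key observation — that perturbing the network's latent activations yields actions distributed as a multivariate Gaussian with a full covariance matrix — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a hypothetical linear readout `W z + b` from an 8-dimensional latent state to 4 actuators, and shows only the cross-actuator correlation (not the temporal correlation). With isotropic latent noise of scale sigma, the induced action covariance is sigma^2 * W @ W.T, which is generally non-diagonal, i.e. the actuator perturbations are correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8-dim latent state, 4 actuators.
latent_dim, action_dim = 8, 4
W = rng.normal(size=(action_dim, latent_dim))  # linear readout weights (illustrative)
b = rng.normal(size=action_dim)                # readout bias
z = rng.normal(size=latent_dim)                # latent state for one observation
sigma = 0.3                                    # latent noise scale

# Perturb the latent state with independent Gaussian noise; the linear
# readout maps it to action noise with full covariance sigma^2 * W @ W.T.
n = 200_000
eps = sigma * rng.normal(size=(n, latent_dim))
actions = (z + eps) @ W.T + b

emp_cov = np.cov(actions, rowvar=False)          # empirical action covariance
analytic_cov = sigma**2 * W @ W.T                # predicted full covariance
print(np.max(np.abs(emp_cov - analytic_cov)))    # small sampling error
```

Because the covariance is shaped by the readout weights, the noise explores the action subspace the network actually uses, rather than perturbing each actuator independently — the property the abstract argues matters for overactuated systems.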

