Revisiting Gaussian mixture critics in off-policy reinforcement learning: a sample-based approach

04/21/2022
by   Bobak Shahriari, et al.

Actor-critic algorithms that make use of distributional policy evaluation have frequently been shown to outperform their non-distributional counterparts on many challenging control tasks; examples include the D4PG and DMPO algorithms as compared to DDPG and MPO, respectively [Barth-Maron et al., 2018; Hoffman et al., 2020]. However, both agents rely on the C51 critic for value estimation. One major drawback of the C51 approach is its requirement of prior knowledge about the minimum and maximum values a policy can attain, as well as the number of bins used, which fixes the resolution of the distributional estimate. While the DeepMind Control Suite of tasks utilizes standardized rewards and episode lengths, thus enabling the entire suite to be solved with a single setting of these hyperparameters, this is often not the case in other settings. This paper revisits a natural alternative that removes this requirement, namely a mixture of Gaussians, together with a simple sample-based loss function to train it in an off-policy regime. We empirically evaluate its performance on a broad range of continuous control tasks and demonstrate that it eliminates the need for these distributional hyperparameters and achieves state-of-the-art performance on a variety of challenging tasks (e.g., the humanoid, dog, quadruped, and manipulator domains). Finally, we provide an implementation in the Acme agent repository.
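The core idea of a sample-based loss for a mixture-of-Gaussians critic can be illustrated as maximizing the log-likelihood of sampled target returns under the critic's predicted mixture. The sketch below is a minimal, hypothetical illustration in NumPy (the function names and shapes are assumptions for exposition; the paper's actual loss and the Acme implementation may differ):

```python
import numpy as np

def mog_log_prob(x, logits, means, scales):
    """Log-density of scalar samples x under a 1-D mixture of Gaussians.

    logits, means, scales: arrays of shape (num_components,) parameterizing
    the mixture weights (via softmax), component means, and standard deviations.
    """
    # Log-softmax over the mixture logits gives log mixture weights.
    log_weights = logits - np.logaddexp.reduce(logits)
    # Per-component Gaussian log-densities for each sample: shape (len(x), K).
    z = (x[:, None] - means[None, :]) / scales[None, :]
    comp_log_prob = -0.5 * z**2 - np.log(scales[None, :]) - 0.5 * np.log(2 * np.pi)
    # Log-sum-exp over components yields the mixture log-density per sample.
    return np.logaddexp.reduce(log_weights[None, :] + comp_log_prob, axis=1)

def sample_based_loss(target_samples, logits, means, scales):
    """Negative log-likelihood of sampled target returns under the mixture.

    target_samples would come from a bootstrapped distributional Bellman
    target (e.g. reward plus discounted samples from the target critic).
    """
    return -np.mean(mog_log_prob(target_samples, logits, means, scales))
```

Unlike C51's fixed support, nothing here requires knowing the minimum or maximum attainable return or choosing a number of bins in advance; the mixture's means and scales are free parameters adapted by gradient descent on this loss.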
