An Investigation of the Bias-Variance Tradeoff in Meta-Gradients

09/22/2022
by Risto Vuorio, et al.
University of Oxford

Meta-gradients provide a general approach for optimizing the meta-parameters of reinforcement learning (RL) algorithms. Estimating meta-gradients is central to the performance of these meta-algorithms and has been studied in the setting of MAML-style short-horizon meta-RL problems. In this context, prior work has investigated the estimation of the Hessian of the RL objective, as well as the problem of assigning credit to pre-adaptation behavior via a sampling correction. However, we show that Hessian estimation, as implemented for example by DiCE and its variants, always adds bias and can also add variance to the meta-gradient estimate. Meanwhile, meta-gradient estimation has been studied less in the important long-horizon setting, where backpropagation through the full inner optimization trajectories is not feasible. We study the bias-variance tradeoff arising from truncated backpropagation and the sampling correction, and additionally compare to evolution strategies, a recently popular alternative for long-horizon meta-learning. While prior work implicitly chooses points in this bias-variance space, we disentangle the sources of bias and variance and present an empirical study relating existing estimators to each other.
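To make the contrast concrete: evolution strategies estimate a gradient from objective evaluations alone, so no backpropagation through the unrolled inner optimization is needed. The sketch below is not the paper's implementation; it is a minimal, generic antithetic ES estimator (names `es_gradient`, `f`, `theta` are illustrative), assuming the meta-objective can be evaluated as a black-box scalar function of the meta-parameters.

```python
import numpy as np

def es_gradient(f, theta, sigma=0.1, n_pairs=512, rng=None):
    """Antithetic evolution-strategies estimate of the gradient of a
    scalar objective f at theta.

    Because it only evaluates f, its bias does not grow with the
    unroll/truncation length of the inner loop; instead, its variance
    grows with the dimensionality of theta.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.standard_normal(theta.shape)
        # Antithetic pair: the difference cancels even-order terms of f.
        grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return grad / (2.0 * sigma * n_pairs)

# Toy check on f(theta) = -||theta||^2, whose true gradient is -2*theta.
theta = np.array([1.0, -2.0])
g = es_gradient(lambda t: -np.sum(t**2), theta, n_pairs=2000, rng=0)
```

For a quadratic objective the antithetic differences are exact directional derivatives, so the estimator is unbiased here; for the long-horizon meta-objectives discussed above, the smoothing by `sigma` introduces the bias that the paper trades off against the variance of truncated backpropagation.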


