Directly Estimating the Variance of the λ-Return Using Temporal-Difference Methods

by Craig Sherstan et al.

This paper investigates estimating the variance of a temporal-difference learning agent's update target. Most reinforcement learning methods use an estimate of the value function, which captures how good it is for the agent to be in a particular state and is mathematically expressed as the expected sum of discounted future rewards (called the return). These values can be estimated straightforwardly by averaging batches of returns using Monte Carlo methods. However, if we wish to update the agent's value estimates during learning, before terminal outcomes are observed, we must use a different estimation target called the λ-return, which truncates the return with the agent's own estimate of the value function. Temporal-difference learning methods estimate the expected λ-return for each state, allowing them to update online and incrementally, and in most cases they achieve lower generalization error and faster learning than Monte Carlo methods. Naturally, one could attempt to estimate higher-order moments of the λ-return; this paper is about estimating its variance. Prior work has shown that, given estimates of the variance of the λ-return, learning systems can be constructed to (1) mitigate risk in action selection and (2) automatically adapt the parameters of the learning process itself to improve performance. Unfortunately, existing methods for estimating the variance of the λ-return are complex and not well understood empirically. We contribute a method for estimating the variance of the λ-return directly, using policy evaluation methods from reinforcement learning. Our approach is significantly simpler than prior methods that independently estimate the second moment of the λ-return. Empirically, our new approach performs at least as well as existing approaches while generally being more robust.
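For reference, the quantities the abstract mentions have standard textbook definitions (these are not results specific to this paper): the discounted return, the value function, and the recursively defined λ-return, which bootstraps on the agent's own value estimate \hat{v}:

    G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}
    v_\pi(s) = \mathbb{E}_\pi[\, G_t \mid S_t = s \,]
    G_t^{\lambda} = R_{t+1} + \gamma \left[ (1-\lambda)\,\hat{v}(S_{t+1}) + \lambda\, G_{t+1}^{\lambda} \right]

As a concrete, hypothetical illustration of what "estimating the variance of the λ-return directly using policy evaluation methods" could look like, the sketch below runs an ordinary tabular TD update for the value estimate and a second TD-style policy-evaluation update for the variance estimate, treating the squared TD error as a meta-reward and a squared discount as a meta-discount. The meta-reward/meta-discount construction, the function name, and the trajectory format are assumptions made purely for illustration; the abstract does not specify the paper's actual update rule.

    # Hypothetical sketch: direct TD-style estimation of the variance of the
    # lambda-return on a small tabular problem. The meta-reward (delta**2) and
    # meta-discount ((gamma*lam)**2) below are assumptions for illustration.
    import numpy as np

    def td_value_and_variance(episodes, n_states, gamma=0.99, lam=0.9,
                              alpha_v=0.1, alpha_var=0.1):
        """episodes: list of trajectories, each a list of (s, r, s_next, done)."""
        v = np.zeros(n_states)    # estimate of the expected lambda-return (value)
        var = np.zeros(n_states)  # direct estimate of the variance of the lambda-return
        for episode in episodes:
            for (s, r, s_next, done) in episode:
                bootstrap = 0.0 if done else v[s_next]
                delta = r + gamma * bootstrap - v[s]   # ordinary TD error
                v[s] += alpha_v * delta

                # Second policy-evaluation problem: squared TD error as meta-reward,
                # squared discount as meta-discount, updated with plain TD(0).
                meta_r = delta ** 2
                meta_gamma = 0.0 if done else (gamma * lam) ** 2
                var_delta = meta_r + meta_gamma * var[s_next] - var[s]
                var[s] += alpha_var * var_delta
        return v, var

The appeal of a direct scheme like this is that the variance estimator reuses ordinary policy-evaluation machinery, rather than maintaining a separate estimate of the second moment and subtracting the squared value estimate from it.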



Incrementally Learning Functions of the Return

Temporal difference methods enable efficient estimation of value functio...

Leveraging the Variance of Return Sequences for Exploration Policy

This paper introduces a method for constructing an upper bound for explo...

A Generalized Bootstrap Target for Value-Learning, Efficiently Combining Value and Feature Predictions

Estimating value functions is a core component of reinforcement learning...

The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation

We study the problem of temporal-difference-based policy evaluation in r...

Direct Advantage Estimation

Credit assignment is one of the central problems in reinforcement learni...

Adaptive Tree Backup Algorithms for Temporal-Difference Reinforcement Learning

Q(σ) is a recently proposed temporal-difference learning method that int...

Preferential Temporal Difference Learning

Temporal-Difference (TD) learning is a general and very useful tool for ...
