Experience Replay with Likelihood-free Importance Weights

by Samarth Sinha, et al.
Stanford University

The use of past experiences to accelerate temporal difference (TD) learning of value functions, or experience replay, is a key component in deep reinforcement learning. Prioritization or reweighting of important experiences has been shown to improve the performance of TD learning algorithms. In this work, we propose to reweight experiences based on their likelihood under the stationary distribution of the current policy. Using the corresponding reweighted TD objective, we implicitly encourage small approximation errors on the value function over frequently encountered states. We use a likelihood-free density ratio estimator over the replay buffer to assign the prioritization weights. We apply the proposed approach empirically to two competitive methods, Soft Actor Critic (SAC) and Twin Delayed Deep Deterministic policy gradient (TD3), over a suite of OpenAI gym tasks, and achieve superior sample complexity compared to other baseline approaches.
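The core idea in the abstract can be sketched in a few lines: a likelihood-free density ratio estimator (e.g., a classifier trained to distinguish fresh on-policy states from replay-buffer states) produces a log-ratio estimate per state, and those ratios become self-normalized weights on the squared TD errors. The function names and the self-normalization step below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def density_ratio_weights(log_ratios):
    """Turn per-state log density-ratio estimates into batch weights.

    `log_ratios` would come from a likelihood-free estimator, e.g. the
    logits of a classifier separating on-policy states from buffer states
    (its training loop is omitted here). Weights are self-normalized so
    they sum to 1 over the batch, keeping the loss scale stable.
    """
    w = np.exp(log_ratios - log_ratios.max())  # subtract max for stability
    return w / w.sum()

def reweighted_td_loss(q_values, td_targets, log_ratios):
    """Weighted TD objective: sum_i w_i * (Q(s_i, a_i) - y_i)^2."""
    w = density_ratio_weights(log_ratios)
    td_errors = q_values - td_targets
    return float(np.sum(w * td_errors ** 2))

# Example: uniform log-ratios reduce to an ordinary mean-squared TD loss.
loss = reweighted_td_loss(
    q_values=np.array([1.0, 2.0]),
    td_targets=np.array([1.0, 1.0]),
    log_ratios=np.zeros(2),
)
```

With uniform log-ratios the weights are 1/2 each, so the loss above equals 0.5 * 0 + 0.5 * 1 = 0.5; states the estimator deems more likely under the current policy receive proportionally larger weight.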



- Lucid Dreaming for Experience Replay: Refreshing Past States with the Current Policy. Experience replay (ER) improves the data efficiency of off-policy reinfo...
- Improved Soft Actor-Critic: Mixing Prioritized Off-Policy Samples with On-Policy Experience. Soft Actor-Critic (SAC) is an off-policy actor-critic reinforcement lear...
- Adaptive Experience Selection for Policy Gradient. Policy gradient reinforcement learning (RL) algorithms have achieved imp...
- Improving Experience Replay with Successor Representation. Prioritized experience replay is a reinforcement learning technique show...
- Remember and Forget for Experience Replay. Experience replay (ER) is crucial for attaining high data-efficiency in ...
- Dynamic Weights in Multi-Objective Deep Reinforcement Learning. Many real-world decision problems are characterized by multiple objectiv...
