Remember and Forget for Experience Replay
Experience replay (ER) is crucial for attaining high data-efficiency in off-policy deep reinforcement learning (RL). ER entails the recall of experiences obtained in past iterations to compute gradient estimates for the current policy. However, the accuracy of such updates may deteriorate when the policy diverges from past behaviors. Remedies that aim to abate policy changes, such as target networks and hyper-parameter tuning, do not prevent the policy from becoming disconnected from past experiences, possibly undermining the effectiveness of ER. We introduce an algorithm that relies on systematic Remembering and Forgetting for ER (ReF-ER). In ReF-ER the RL agents forget experiences that would be too unlikely under the current policy and constrain policy changes within a trust region of the past behaviors in the replay memory. We show that ReF-ER improves the reliability and performance of off-policy RL in both the deterministic and the stochastic policy gradient settings. Finally, we complement ReF-ER with a novel off-policy actor-critic algorithm (RACER) for continuous-action control problems. RACER employs a computationally efficient closed-form approximation of on-policy action values and is shown to be highly competitive with state-of-the-art algorithms on benchmark problems, while being robust to large hyper-parameter variations.
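To make the "remember and forget" idea concrete, the sketch below illustrates one plausible reading of the filtering rule described above: each stored experience is kept for gradient updates only if the probability of its recorded action under the current policy, relative to the behavior that generated it, falls inside a cut-off interval. This is a minimal illustration, not the paper's implementation; the helper names (`importance_weight`, `partition_replay`), the tuple layout of the replay entries, and the cut-off `c_max` are assumptions introduced here, and the complementary trust-region penalty on policy updates is not shown.

```python
def importance_weight(pi_prob, mu_prob, eps=1e-8):
    """Hypothetical helper: ratio of the current-policy probability of a
    stored action to the behavior-policy probability recorded at the time
    the experience was collected."""
    return pi_prob / (mu_prob + eps)


def partition_replay(replay, pi_probs, c_max=4.0):
    """Split a replay memory into 'near-policy' experiences to remember and
    'far-policy' experiences to forget, based on the importance weight.

    replay   : list of tuples (state, action, reward, next_state, mu_prob),
               where mu_prob is the behavior probability of the stored action
               (assumed storage format)
    pi_probs : probability of each stored action under the current policy
    c_max    : cut-off on the importance weight (assumed hyper-parameter)
    """
    remember, forget = [], []
    for experience, pi_prob in zip(replay, pi_probs):
        rho = importance_weight(pi_prob, experience[-1])
        if 1.0 / c_max < rho < c_max:
            remember.append(experience)  # action still plausible under the current policy
        else:
            forget.append(experience)    # too far off-policy; exclude from gradient updates
    return remember, forget
```

In this reading, only the "remember" partition would contribute to the gradient estimates, which keeps the updates anchored to experiences that remain representative of the current policy.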