Return-based Scaling: Yet Another Normalisation Trick for Deep RL

by Tom Schaul et al.

Scaling issues are mundane yet irritating for practitioners of reinforcement learning. Error scales vary across domains, tasks, and stages of learning, sometimes by many orders of magnitude. This can be detrimental to learning speed and stability, create interference between learning tasks, and necessitate substantial tuning. We revisit this topic for agents based on temporal-difference learning, sketch out some desiderata, and investigate scenarios where simple fixes fall short. The mechanism we propose requires neither tuning, clipping, nor adaptation. We validate its effectiveness and robustness on the suite of Atari games. Our scaling method turns out to be particularly helpful at mitigating interference when training a shared neural network on multiple targets that differ in reward scale or discounting.
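To make the idea concrete, here is a minimal, hedged sketch of return-based scaling: TD errors are divided by a running estimate of the standard deviation of observed returns before they reach the optimiser. The class name `ReturnScaler` and the Welford-style running estimator are illustrative assumptions for this sketch; the paper's actual estimator may combine reward and return statistics differently.

```python
import math

class ReturnScaler:
    """Illustrative sketch (not the paper's exact estimator): keep a
    running estimate of the standard deviation of observed returns,
    and divide TD errors by it so gradient magnitudes stay comparable
    across tasks with very different reward scales."""

    def __init__(self, eps=1e-8):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford)
        self.eps = eps

    def update(self, g):
        # Welford's online update with a newly observed return g.
        self.count += 1
        delta = g - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (g - self.mean)

    @property
    def sigma(self):
        # Fall back to 1.0 until enough returns have been seen.
        if self.count < 2:
            return 1.0
        return max(math.sqrt(self.m2 / self.count), self.eps)

    def scale(self, td_error):
        # Normalise the TD error by the return scale; no tuning,
        # clipping, or per-task adaptation is involved.
        return td_error / self.sigma

scaler = ReturnScaler()
for g in [100.0, 120.0, 80.0]:
    scaler.update(g)
scaled = scaler.scale(40.0)
```

After seeing returns of roughly magnitude 100, a raw TD error of 40 is scaled down to order 1, which is the effect the abstract describes: errors from tasks with large rewards no longer dominate a shared network's gradients.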


