Noise, overestimation and exploration in Deep Reinforcement Learning

06/25/2020
by Rafael Stekolshchik, et al.

We discuss several statistical noise-related phenomena that have been investigated by different authors in the framework of Deep Reinforcement Learning algorithms. The following algorithms are covered: DQN, Double DQN, DDPG, TD3, and Hill-Climbing. First, we consider overestimation, a harmful property resulting from noise. Then we deal with noise used for exploration, which is useful noise. We discuss setting the noise parameter in TD3 for typical PyBullet environments associated with articulated bodies, such as HopperBulletEnv and Walker2DBulletEnv. In the appendix, in connection with the Hill-Climbing algorithm, we look at one more kind of noise: adaptive noise.
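As a concrete illustration of the exploration noise mentioned above, the following is a minimal sketch of TD3-style action-space exploration, where zero-mean Gaussian noise is added to a deterministic policy's action before it is sent to the environment. The function name add_exploration_noise, the parameter expl_noise, and the example action are illustrative assumptions, not the article's implementation.

```python
import numpy as np

def add_exploration_noise(action, max_action, expl_noise=0.1, rng=None):
    """Add zero-mean Gaussian noise to a deterministic action (TD3-style exploration).

    expl_noise is the noise standard deviation expressed as a fraction of
    max_action; larger values mean more exploration. The clip keeps the
    perturbed action inside the valid action range.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, max_action * expl_noise, size=np.shape(action))
    return np.clip(action + noise, -max_action, max_action)

# Illustrative usage: a 3-dimensional continuous action bounded in [-1, 1]
action = np.array([0.2, -0.5, 0.9])
noisy_action = add_exploration_noise(action, max_action=1.0, expl_noise=0.1)
```

In environments such as HopperBulletEnv or Walker2DBulletEnv, the value of the noise standard deviation is a tuning choice: too little noise limits exploration, while too much noise degrades the actions actually executed.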
