Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration

03/18/2019
by Jingwei Zhang, et al.

Exploration in sparse-reward reinforcement learning remains a difficult open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Most commonly, these signals are added as bonus rewards, which results in a mixture policy that, for extended stretches of training, faithfully pursues neither exploration nor the task itself. In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives in order to accelerate exploration and stabilize learning. Moreover, we introduce a new type of intrinsic reward, termed successor feature control (SFC), which is general and not task-specific. It takes statistics over complete trajectories into account and thus differs from previous methods that use only local information to evaluate intrinsic motivation. We evaluate our proposed scheduled intrinsic drive (SID) agent in three environments with purely visual inputs: VizDoom, DeepMind Lab, and OpenAI Gym classic control from pixels. The results show greatly improved exploration efficiency from SFC and from the hierarchical use of the intrinsic drives. A video of our experimental results can be found at https://youtu.be/4ZHcBo7006Y.
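
The abstract names the two ingredients, a trajectory-aware SFC bonus and hierarchical scheduling of separate drives, without giving their exact form, so the Python sketch below should be read as one plausible instantiation rather than the paper's implementation. It assumes the SFC bonus is the squared change in learned successor features between consecutive states, and that a high-level scheduler picks one drive uniformly at random and follows it for a fixed number of steps. All identifiers (sfc_reward, ScheduledIntrinsicDrive, steps_per_drive) are illustrative, not taken from the paper.

```python
import numpy as np

def sfc_reward(psi_t, psi_tp1):
    # Hypothetical SFC bonus: squared change in successor features between
    # consecutive states. Because successor features are discounted sums of
    # state features over the remaining trajectory, this change reflects
    # trajectory-level statistics rather than purely local novelty.
    return float(np.sum((psi_tp1 - psi_t) ** 2))

class ScheduledIntrinsicDrive:
    # Minimal SID-style scheduler sketch: one policy per drive, and a
    # high-level schedule that hands control to a single drive for a
    # fixed number of steps instead of summing rewards into one bonus.
    def __init__(self, policies, steps_per_drive=100, seed=None):
        self.policies = policies              # drive name -> (state -> action)
        self.steps_per_drive = steps_per_drive
        self.rng = np.random.default_rng(seed)
        self._active = None                   # currently scheduled drive
        self._steps_left = 0

    def act(self, state):
        if self._steps_left == 0:
            # Assumption: drives are picked uniformly at random; the paper
            # may use a different (e.g. learned or fixed) schedule.
            self._active = str(self.rng.choice(list(self.policies)))
            self._steps_left = self.steps_per_drive
        self._steps_left -= 1
        return self.policies[self._active](state)

# Usage with placeholder policies (illustrative only):
sid = ScheduledIntrinsicDrive({
    "extrinsic": lambda s: 0,  # stand-in for the task policy's action
    "sfc":       lambda s: 1,  # stand-in for the exploration policy's action
})
print(sid.act(state=None))
```

The design point the abstract argues for is visible in this sketch: at any moment the agent executes exactly one drive's policy, so exploration and task fulfillment are each pursued faithfully in turn rather than blurred together by a summed bonus reward.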
