Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning

by Sébastien Forestier et al.

Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present a formal framework for unsupervised multi-goal reinforcement learning, as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP), to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting one goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces together with intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple goal spaces with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity that act as stepping stones for learning more complex skills (such as nested tool use). We show that learning several spaces of diverse problems can be more efficient for acquiring complex skills than trying to learn these complex skills directly. We illustrate the computational efficiency of IMGEPs: these robotic experiments use a simple memory-based low-level policy representation and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3.
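The four principles above can be illustrated by a minimal sketch of an IMGEP loop. This is a hypothetical toy implementation, not the paper's robotic setup: the environment is a closed-form function of the policy parameters, each `GoalModule` is one goal space holding a memory of (policy, outcome) pairs, modules are selected by an estimate of learning progress (the decrease of goal-reaching error over recent trials), and policy search is a nearest-neighbor lookup plus Gaussian perturbation.

```python
# Minimal IMGEP sketch (toy setting, assumed names throughout).
import math
import random

def dist(a, b):
    """Euclidean distance between two outcome/goal vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GoalModule:
    """One goal space: stores (policy params, outcome) pairs, tracks progress."""
    def __init__(self, dim, outcome_fn):
        self.dim = dim                # dimensionality of this goal space
        self.outcome_fn = outcome_fn  # maps policy params -> outcome in this space
        self.memory = []              # list of (params, outcome) pairs
        self.errors = []              # goal-reaching errors, in trial order

    def sample_goal(self, rng):
        # 1) self-generate a goal as a point in this parameterized space
        return [rng.uniform(-1.0, 1.0) for _ in range(self.dim)]

    def nearest(self, goal):
        # find the stored policy whose outcome is closest to the goal
        return min(self.memory, key=lambda po: dist(po[1], goal))

    def learning_progress(self, window=10):
        # 2) intrinsic reward: drop in mean error between two recent windows
        if len(self.errors) < 2 * window:
            return float("inf")       # favor under-explored modules first
        old = sum(self.errors[-2 * window:-window]) / window
        new = sum(self.errors[-window:]) / window
        return old - new

def imgep(modules, n_iterations=200, n_params=4, sigma=0.2, seed=0):
    rng = random.Random(seed)
    for _ in range(n_iterations):
        # select the goal space with the highest estimated learning progress
        module = max(modules, key=lambda m: m.learning_progress())
        goal = module.sample_goal(rng)
        if module.memory:
            # 3) fast incremental search: perturb the best-known policy
            params, _ = module.nearest(goal)
            params = [p + rng.gauss(0.0, sigma) for p in params]
        else:
            params = [rng.uniform(-1.0, 1.0) for _ in range(n_params)]
        outcome = module.outcome_fn(params)   # run the (toy) policy once
        module.errors.append(dist(outcome, goal))
        # 4) systematic reuse: the same rollout informs every goal space
        for m in modules:
            m.memory.append((params, m.outcome_fn(params)))
    return modules
```

A usage example with two toy goal spaces of different difficulty (an easy 1-D outcome and a harder 2-D one) would be `imgep([GoalModule(1, lambda p: [math.tanh(p[0])]), GoalModule(2, lambda p: [math.tanh(p[0] * p[1]), math.tanh(p[2])])])`; the progress-based selection then spends more trials where error is still shrinking, which is the curriculum effect described above in miniature.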


