Lifelong Reinforcement Learning with Modulating Masks

by Eseoghene Ben-Iwhiwhu, et al.

Lifelong learning aims to create AI systems that continuously and incrementally learn over a lifetime, similar to biological learning. Attempts so far have encountered problems, including catastrophic forgetting, interference among tasks, and the inability to exploit previous knowledge. While considerable research has focused on learning multiple input distributions, typically in classification, lifelong reinforcement learning (LRL) must also deal with variations in the state and transition distributions, and in the reward functions. Modulating masks, recently developed for classification, are particularly suitable for such a wide spectrum of task variations. In this paper, we adapted modulating masks to work with deep LRL, specifically with PPO and IMPALA agents. The comparison with LRL baselines in both discrete and continuous RL tasks shows competitive performance. We further investigated the use of a linear combination of previously learned masks to exploit previous knowledge when learning new tasks: not only is learning faster, but the algorithm also solves tasks that we could not otherwise solve from scratch due to extremely sparse rewards. The results suggest that RL with modulating masks is a promising approach to lifelong learning, to the composition of knowledge to learn increasingly complex tasks, and to knowledge reuse for efficient and faster learning.
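The core mechanism can be illustrated with a minimal sketch: a frozen, randomly initialized backbone whose weights are modulated by per-task binary masks, with a new task's mask scores initialized as a linear combination of previously learned ones. This is not the authors' implementation; the class name `MaskedLinearPolicy`, the coefficient parameter `beta`, and the single-layer setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class MaskedLinearPolicy:
    """Sketch of a mask-modulated network (hypothetical names/API)."""

    def __init__(self, in_dim, out_dim):
        # Backbone weights are frozen after random initialization;
        # only per-task mask scores would be trained.
        self.W = rng.standard_normal((out_dim, in_dim))
        self.task_scores = {}  # task_id -> real-valued mask scores

    def new_task(self, task_id, prev_ids=None, beta=None):
        if prev_ids:
            # Knowledge reuse: initialize the new task's scores as a
            # linear combination of previously learned masks.
            scores = sum(b * self.task_scores[t]
                         for b, t in zip(beta, prev_ids))
        else:
            scores = rng.standard_normal(self.W.shape)
        self.task_scores[task_id] = scores

    def forward(self, x, task_id):
        # Binarize scores into a 0/1 mask, then modulate the backbone.
        mask = (self.task_scores[task_id] > 0).astype(self.W.dtype)
        return (self.W * mask) @ x

policy = MaskedLinearPolicy(4, 2)
policy.new_task("task_a")
policy.new_task("task_b")
# A third task starts from a combination of the first two masks.
policy.new_task("task_c", prev_ids=["task_a", "task_b"], beta=[0.5, 0.5])
out = policy.forward(np.ones(4), "task_c")
print(out.shape)  # (2,)
```

In an RL setting, the mask scores (and, for a new task, the combination coefficients) would be optimized by the policy-gradient loss while the backbone stays fixed, which is what prevents forgetting across tasks.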




Sharing Lifelong Reinforcement Learning Knowledge via Modulating Masks

Lifelong learning agents aim to learn multiple tasks sequentially over a...

Learning Vision-based Robotic Manipulation Tasks Sequentially in Offline Reinforcement Learning Settings

With the rise of deep reinforcement learning (RL) methods, many complex ...

Utilizing Prior Solutions for Reward Shaping and Composition in Entropy-Regularized Reinforcement Learning

In reinforcement learning (RL), the ability to utilize prior knowledge f...

Learning state correspondence of reinforcement learning tasks for knowledge transfer

Deep reinforcement learning has shown an ability to achieve super-human ...

Ternary Feature Masks: continual learning without any forgetting

In this paper, we propose an approach without any forgetting to continua...

Catastrophic Interference in Reinforcement Learning: A Solution Based on Context Division and Knowledge Distillation

The powerful learning ability of deep neural networks enables reinforcem...

Adding New Tasks to a Single Network with Weight Transformations using Binary Masks

Visual recognition algorithms are required today to exhibit adaptive abi...
