Fast and Slow Learning of Recurrent Independent Mechanisms

by Kanika Madan, et al.

Decomposing knowledge into interchangeable pieces promises a generalization advantage when there are changes in distribution. A learning agent interacting with its environment is likely to be faced with situations requiring novel combinations of existing pieces of knowledge. We hypothesize that such a decomposition of knowledge is particularly relevant for being able to generalize in a systematic manner to out-of-distribution changes. To study these ideas, we propose a particular training framework in which we assume that the pieces of knowledge an agent needs and its reward function are stationary and can be re-used across tasks. An attention mechanism dynamically selects which modules can be adapted to the current task, and the parameters of the selected modules are allowed to change quickly as the learner is confronted with variations in what it experiences, while the parameters of the attention mechanisms act as stable, slowly changing, meta-parameters. We focus on pieces of knowledge captured by an ensemble of modules sparsely communicating with each other via a bottleneck of attention. We find that meta-learning the modular aspects of the proposed system greatly helps in achieving faster adaptation in a reinforcement learning setup involving navigation in a partially observed grid world with image-level input. We also find that reversing the role of parameters and meta-parameters does not work nearly as well, suggesting a particular role for fast adaptation of the dynamically selected modules.
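The mechanism the abstract describes, an attention bottleneck that selects a sparse subset of modules, whose parameters then adapt quickly while the attention parameters change slowly, can be illustrated with a toy sketch. This is a minimal, hypothetical illustration in NumPy, not the paper's implementation: the module count, dimensions, and the inner-loop regression objective are all assumptions, and the slow meta-update of the attention logits (done across tasks via meta-gradients in the paper) is only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modules, d, k = 4, 8, 2  # assumed toy sizes

# Fast parameters: one linear module each, adapted quickly within a task.
module_W = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_modules)]
# Slow meta-parameters: attention logits gating module selection.
# In the paper these change slowly, across tasks, via meta-learning;
# here they are simply held fixed.
attn_logits = rng.normal(scale=0.1, size=n_modules)

def select_modules(logits, k):
    """Softmax attention over modules, keeping only the top-k (sparse bottleneck)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return np.argsort(p)[-k:], p

def forward(x, active):
    """Only the selected modules process the input; the rest stay inactive."""
    return sum(module_W[i] @ x for i in active)

# Fast inner-loop adaptation on a toy regression target:
# only the attention-selected modules are updated.
x = rng.normal(size=d)
target = rng.normal(size=d)
active, _ = select_modules(attn_logits, k)

losses = []
for _ in range(25):
    err = forward(x, active) - target
    losses.append(float(err @ err))
    # Gradient of 0.5*||err||^2 w.r.t. each active W_i is outer(err, x);
    # the step is normalized so the residual contracts by half each iteration.
    step = 0.5 * np.outer(err, x) / (k * (x @ x))
    for i in active:
        module_W[i] -= step
```

After a handful of fast steps the inner-loop loss collapses while the non-selected modules and the attention logits are untouched, mirroring the fast/slow split the abstract argues for.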




Robotic Control Using Model Based Meta Adaption

In machine learning, meta-learning methods aim for fast adaptability to ...

Meta-learning curiosity algorithms

We hypothesize that curiosity is a mechanism found by evolution that enc...

Recurrent Independent Mechanisms

Learning modular structures which reflect the dynamics of the environmen...

Algorithm Design for Online Meta-Learning with Task Boundary Detection

Online meta-learning has recently emerged as a marriage between batch me...

A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms

We propose to meta-learn causal structures based on how fast a learner a...

Learning Neural Causal Models from Unknown Interventions

Meta-learning over a set of distributions can be interpreted as learning...
