Inferred successor maps for better transfer learning

06/18/2019
by   Tamas J. Madarasz, et al.

Humans and animals show remarkable flexibility in adjusting their behaviour when their goals, or the rewards in their environment, change. While such flexibility is a hallmark of intelligent behaviour, these multi-task scenarios remain an important challenge for machine learning algorithms and neurobiological models alike. Factored representations can enable flexible behaviour by abstracting away general aspects of a task from those prone to change, while nonparametric methods provide a principled way of using similarity to past experiences to guide current behaviour. Here we combine the successor representation (SR), which factors the value of actions into expected outcomes and corresponding rewards, with nonparametric inference that evaluates task similarity and clusters the space of rewards. The proposed algorithm improves the SR's transfer capabilities by inverting a generative model over tasks, while also explaining important neurobiological signatures of place-cell representation in the hippocampus. It dynamically samples from a flexible number of distinct SR maps while accumulating evidence about the current reward context, and outperforms competing algorithms in settings with both known and unsignalled reward changes. It reproduces the "flickering" behaviour of hippocampal maps seen when rodents navigate to changing reward locations, and gives a quantitative account of trajectory-dependent hippocampal representations (so-called splitter cells) and their dynamics. We thus provide a novel algorithmic approach for multi-task learning, as well as a common normative framework linking these different characteristics of the brain's spatial representation.
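To make the factorisation concrete, the following is a minimal sketch of the basic successor-representation idea the abstract builds on, not the paper's algorithm: for a fixed policy with transition matrix P, the SR matrix M holds expected discounted future state occupancies, so state values are simply V = M r. When only the reward vector r changes, values can be re-evaluated immediately without relearning M, which is what makes the SR attractive for transfer. The toy chain and reward vectors below are illustrative assumptions.

```python
import numpy as np

def successor_matrix(P, gamma=0.9):
    """SR for a fixed policy: M = (I - gamma * P)^-1,
    i.e. expected discounted future occupancy of each state."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

# Toy 3-state chain (state 2 is absorbing) under a fixed policy.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
M = successor_matrix(P, gamma=0.9)

r_old = np.array([0.0, 0.0, 1.0])   # reward at state 2
r_new = np.array([1.0, 0.0, 0.0])   # reward moves to state 0

V_old = M @ r_old                   # values under the old rewards
V_new = M @ r_new                   # instant re-evaluation: M is reused
```

The paper's contribution goes beyond this single-map picture by maintaining several candidate SR maps and inferring, nonparametrically, which reward context is currently active; the sketch only shows why separating M from r enables cheap transfer in the first place.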

