Learning Shared Representations in Multi-task Reinforcement Learning

03/07/2016
by Diana Borsa, et al.

We investigate a paradigm in multi-task reinforcement learning (MT-RL) in which an agent is placed in an environment and needs to learn to perform a series of tasks within this space. Since the environment does not change, there is potentially a lot of common ground amongst tasks, and learning to solve them individually seems extremely wasteful. In this paper, we explicitly model and learn this shared structure as it arises in the state-action value space. We show how one can jointly learn optimal value functions by modifying the popular Value-Iteration and Policy-Iteration procedures to accommodate this shared representation assumption and to leverage the power of multi-task supervised learning. Finally, we demonstrate that the proposed model and training procedures are able to infer good value functions, even in low-sample regimes. In addition to data efficiency, our analysis shows that learning abstractions of the state space jointly across tasks leads to more robust, transferable representations with the potential for better generalization.
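The core idea, fitting value functions for several tasks through a shared state representation inside a value-iteration loop, can be sketched in a few lines. The toy below is a minimal illustration, not the authors' implementation: a multi-task variant of fitted value iteration on synthetic tabular MDPs, where all tasks share one transition kernel and a low-dimensional feature table W, while each task keeps its own value head. The dimensions, the low-rank reward construction, and the alternating least-squares fit are all assumptions made for this sketch.

```python
import numpy as np

# Toy sketch of multi-task value iteration with a shared representation.
# NOT the paper's algorithm: dimensions, the low-rank reward construction,
# and the alternating least-squares fit are illustrative assumptions.

rng = np.random.default_rng(0)
n_states, n_actions, n_tasks, d = 20, 4, 3, 2
gamma = 0.95

# All tasks share one transition kernel P[s, a] -> distribution over s'.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# Task rewards built from two shared latent factors, so the tasks
# genuinely have common structure to exploit.
base = rng.normal(size=(2, n_states, n_actions))
mix = rng.normal(size=(n_tasks, 2))
R = np.einsum("tk,ksa->tsa", mix, base)          # R[t, s, a]

W = rng.normal(size=(n_states, d))               # shared features, one row per state
heads = rng.normal(size=(n_tasks, d))            # task-specific value heads

for _ in range(100):
    V = W @ heads.T                              # V[s, t] = current value estimates
    EV = P @ V                                   # E_{s'}[V(s', t)], shape (S, A, T)
    # Bellman optimality targets for every task at once.
    targets = (R.transpose(1, 2, 0) + gamma * EV).max(axis=1)   # (S, T)
    # Joint fit: refit the shared features with the heads fixed, then the
    # per-task heads with the features fixed (one alternating LS sweep).
    W = np.linalg.lstsq(heads, targets.T, rcond=None)[0].T
    heads = np.linalg.lstsq(W, targets, rcond=None)[0].T

print("max Bellman residual:", np.abs(W @ heads.T - targets).max())
```

Because the joint value table is constrained to rank d, which is smaller than the number of tasks, the tasks are forced to share features; solving each task independently would impose no such coupling. In the paper, this role is played by the shared representation learned inside the modified Value-Iteration and Policy-Iteration procedures.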

Related research

09/12/2018: Combined Reinforcement Learning via Abstract Representations
In the quest for efficient and robust reinforcement learning methods, bo...

07/11/2019: A Model-based Approach for Sample-efficient Multi-task Reinforcement Learning
The aim of multi-task reinforcement learning is two-fold: (1) efficientl...

10/05/2020: Randomized Value Functions via Posterior State-Abstraction Sampling
State abstraction has been an essential tool for dramatically improving ...

11/15/2021: Modular Networks Prevent Catastrophic Interference in Model-Based Multi-Task Reinforcement Learning
In a multi-task reinforcement learning setting, the learner commonly ben...

02/08/2020: Learning State Abstractions for Transfer in Continuous Control
Can simple algorithms with a good representation solve challenging reinf...

02/16/2020: TempLe: Learning Template of Transitions for Sample Efficient Multi-task RL
Transferring knowledge among various environments is important to effici...

07/01/2019: On mechanisms for transfer using landmark value functions in multi-task lifelong reinforcement learning
Transfer learning across different reinforcement learning (RL) tasks is ...
