Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer

by Lucas N. Alegre, et al.

In many real-world applications, reinforcement learning (RL) agents must solve multiple tasks, each typically modeled via a reward function. If reward functions are expressed linearly, and the agent has previously learned a set of policies for different tasks, successor features (SFs) can be exploited to combine such policies and identify reasonable solutions for new problems. However, the identified solutions are not guaranteed to be optimal. We introduce a novel algorithm that addresses this limitation. It allows RL agents to combine existing policies and directly identify optimal policies for arbitrary new problems, without requiring any further interactions with the environment. We first show (under mild assumptions) that the transfer learning problem tackled by SFs is equivalent to the problem of learning to optimize multiple objectives in RL. We then introduce an SF-based extension of the Optimistic Linear Support algorithm to learn a set of policies whose SFs form a convex coverage set. We prove that policies in this set can be combined via generalized policy improvement to construct optimal behaviors for any new linearly-expressible task, without requiring any additional training samples. We empirically show that our method outperforms state-of-the-art competing algorithms in both discrete and continuous domains under value function approximation.
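To make the policy-combination step concrete, here is a minimal sketch of generalized policy improvement (GPI) over successor features. All names and shapes are assumptions for illustration, not the paper's implementation: `psi[i, s, a]` holds the SF vector ψ^{π_i}(s, a) ∈ R^d for the i-th previously learned policy, and `w` ∈ R^d are the reward weights of a new linearly-expressible task r(s, a) = φ(s, a) · w.

```python
import numpy as np

def gpi_action(psi, w, s):
    """Return the GPI action for state s: the argmax over actions of the
    maximum, over known policies, of q_i(s, a) = psi_i(s, a) . w.
    (Hypothetical sketch; psi has shape (n_policies, n_states, n_actions, d).)"""
    q = psi[:, s, :] @ w               # action-values, shape (n_policies, n_actions)
    return int(q.max(axis=0).argmax())  # best action under the best known policy

# Tiny example: 2 policies, 1 state, 2 actions, d = 2 features.
psi = np.array([
    [[[1.0, 0.0], [0.0, 1.0]]],  # SFs of policy 0
    [[[0.5, 0.5], [1.0, 1.0]]],  # SFs of policy 1
])
w = np.array([0.2, 0.8])         # new task's reward weights
print(gpi_action(psi, w, 0))     # -> 1
```

The paper's contribution is choosing which policies to learn so that their SFs form a convex coverage set, which guarantees that this GPI step yields an optimal (not merely reasonable) policy for any weight vector `w`.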


