
PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning

by Angelos Filos, et al.

We study reinforcement learning (RL) with no-reward demonstrations, a setting in which an RL agent has access to additional data from the interaction of other agents with the same environment. However, it has no access to the rewards or goals of these agents, and their objectives and levels of expertise may vary widely. These assumptions are common in multi-agent settings, such as autonomous driving. To effectively use this data, we turn to the framework of successor features. This allows us to disentangle shared features and dynamics of the environment from agent-specific rewards and policies. We propose a multi-task inverse reinforcement learning (IRL) algorithm, called inverse temporal difference learning (ITD), that learns shared state features, alongside per-agent successor features and preference vectors, purely from demonstrations without reward labels. We further show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called ΨΦ-learning (pronounced `Sci-Fi'). We provide empirical evidence for the effectiveness of ΨΦ-learning as a method for improving RL, IRL, imitation, and few-shot transfer, and derive worst-case bounds for its performance in zero-shot transfer to new tasks.
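The core idea of the successor-feature decomposition used here is that rewards factor as r(s, a) = φ(s, a)·w, so the action-value of a policy factors as Q^π(s, a) = ψ^π(s, a)·w, where ψ obeys a Bellman equation in feature space. The following toy NumPy sketch (hypothetical names and dynamics, not the paper's code) illustrates the decomposition and checks it against direct value iteration on the induced rewards:

```python
import numpy as np

# Toy successor-feature decomposition (illustrative sketch, not the paper's code).
# Shared state-action features phi are environment-level; the per-agent pieces are
# a preference vector w (so r(s, a) = phi(s, a) . w) and successor features psi
# satisfying the Bellman equation psi(s, a) = phi(s, a) + gamma * psi(s', pi(s')).

gamma = 0.9
n_states, n_actions, d = 4, 2, 3
rng = np.random.default_rng(0)

phi = rng.normal(size=(n_states, n_actions, d))    # shared features
w = rng.normal(size=d)                             # agent-specific preferences
policy = rng.integers(n_actions, size=n_states)    # fixed deterministic policy
next_state = rng.integers(n_states, size=(n_states, n_actions))  # toy dynamics

# Solve for psi by fixed-point iteration on the successor-feature Bellman equation.
psi = np.zeros_like(phi)
for _ in range(200):
    psi = phi + gamma * psi[next_state, policy[next_state]]

# Q-values for this agent follow from the decomposition; swapping in another
# agent's w re-evaluates the same policy under different preferences for free.
q = psi @ w

# Sanity check: psi . w matches value iteration on the induced rewards directly.
r = phi @ w
q_direct = np.zeros((n_states, n_actions))
for _ in range(200):
    q_direct = r + gamma * q_direct[next_state, policy[next_state]]
assert np.allclose(q, q_direct)
```

This is what lets ITD share φ across agents while fitting only ψ and w per demonstrator: zero-shot transfer to a new task amounts to evaluating the learned ψ against a new preference vector w.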
