A First-Occupancy Representation for Reinforcement Learning

by Ted Moskovitz, et al.

Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and that enable them to traverse their environments efficiently to reach rewarding states. The successor representation (SR), which measures the expected cumulative, discounted state occupancy under a fixed policy, enables efficient transfer to different reward structures in an otherwise constant Markovian environment and has been hypothesized to underlie aspects of biological behavior and neural activity. However, in the real world, rewards may only be available for consumption once, may shift location over time, or agents may simply aim to reach goal states as rapidly as possible without the constraint of artificially imposed task horizons. In such cases, the most behaviorally relevant representation would carry information about when the agent is likely to first reach states of interest, rather than how often it should expect to visit them over a potentially infinite time span. To reflect such demands, we introduce the first-occupancy representation (FR), which measures the expected temporal discount to the first time a state is accessed. We demonstrate that the FR facilitates the selection of efficient paths to desired states, allows the agent, under certain conditions, to plan provably optimal trajectories defined by a sequence of subgoals, and induces behavior similar to that of animals avoiding threatening stimuli.
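The contrast between the two representations can be sketched in a few lines of tabular code. This is an illustrative reconstruction from the definitions in the abstract, not code from the paper: the SR satisfies M = I + γPM, while the FR replaces accumulated occupancy with the expected discount at the *first* arrival, so its recursion only continues for states other than the one being queried. The toy cyclic chain MDP and all function names below are assumptions chosen for illustration.

```python
import numpy as np

def successor_representation(P, gamma):
    """SR under a fixed policy: M = I + gamma * P @ M, solved in
    closed form as M = (I - gamma * P)^{-1}."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

def first_occupancy_representation(P, gamma, iters=500):
    """FR: F[s, s'] = 1 if s == s', else gamma * E[F[s1, s']],
    i.e. the expected discount accrued when s' is first reached.
    Computed by fixed-point iteration on the recursion."""
    n = P.shape[0]
    eye = np.eye(n)
    F = np.zeros((n, n))
    for _ in range(iters):
        # Diagonal: already at s', contribute 1 immediately.
        # Off-diagonal: discount one step and continue from s1.
        F = eye + (1 - eye) * (gamma * P @ F)
    return F

# Deterministic 3-state cycle 0 -> 1 -> 2 -> 0 (illustrative MDP).
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
gamma = 0.9

M = successor_representation(P, gamma)
F = first_occupancy_representation(P, gamma)
# SR counts every discounted revisit: M[0, 1] = gamma / (1 - gamma**3).
# FR records only the first visit:    F[0, 1] = gamma.
```

In the cycle, the agent revisits state 1 every three steps, so the SR entry M[0, 1] sums the geometric series of revisits, whereas the FR entry F[0, 1] is just the one-step discount γ — exactly the "when will I first get there" quantity the abstract describes.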


