
Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization

by Mirco Mutti, et al.

In the sequential decision making setting, an agent aims to achieve systematic generalization over a large, possibly infinite, set of environments. Such environments are modeled as discrete Markov decision processes whose states and actions are represented through feature vectors. The underlying structure of the environments allows the transition dynamics to be factored into two components: one that is environment-specific and one that is shared. As an illustrative example, consider a set of environments that share the laws of motion. In this setting, the agent can take a finite number of reward-free interactions from a subset of these environments. The agent must then be able to approximately solve any planning task defined over any environment in the original set, relying on those interactions alone. Can we design a provably efficient algorithm that achieves this ambitious goal of systematic generalization? In this paper, we give a partially positive answer to this question. First, we provide the first tractable formulation of systematic generalization by employing a causal viewpoint. Then, under specific structural assumptions, we provide a simple learning algorithm that guarantees any desired planning error up to an unavoidable sub-optimality term, while achieving polynomial sample complexity.
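The factored structure described above can be made concrete with a minimal sketch. This is not the paper's formulation or algorithm, only an illustration under assumed names: a family of environments whose transition dynamics compose a shared component (the common "laws of motion") with an environment-specific parameter (here, a per-environment drift).

```python
def shared_dynamics(state, action):
    # Shared component of the transition dynamics, common to every
    # environment in the family (the "laws of motion").
    return state + action

def make_environment(drift):
    # Environment-specific component: each environment in the family
    # is identified by its own drift parameter (a hypothetical choice
    # for illustration only).
    def step(state, action):
        # The full transition composes the shared and the
        # environment-specific components.
        return shared_dynamics(state, action) + drift
    return step

# Two environments from the family: same laws of motion,
# different environment-specific factor.
env_a = make_environment(drift=0.0)
env_b = make_environment(drift=1.0)
```

In this toy picture, reward-free interactions with a subset of environments suffice to estimate the shared component once, after which only the low-dimensional environment-specific factor remains unknown for any new environment in the family.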

