Reinforcement Learning in Factored Action Spaces using Tensor Decompositions

by Anuj Mahajan, et al.
University of Oxford

We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions. The goal of this abstract is twofold: (1) to garner greater interest amongst the tensor research community in creating methods and analyses for approximate RL, and (2) to elucidate the generalised setting of factored action spaces where tensor decompositions can be used. We use the cooperative multi-agent reinforcement learning scenario as the exemplary setting, where the action space is naturally factored across agents and learning becomes intractable without resorting to approximation of the underlying hypothesis space of candidate solutions.


Model based Multi-agent Reinforcement Learning with Tensor Decompositions

A challenge in multi-agent reinforcement learning is to be able to gener...

Generalising Discrete Action Spaces with Conditional Action Trees

There are relatively few conventions followed in reinforcement learning ...

Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning

Reinforcement Learning in large action spaces is a challenging problem. ...

Tensor Decompositions in Deep Learning

The paper surveys the topic of tensor decompositions in modern machine l...

Toward Real-Time Decentralized Reinforcement Learning using Finite Support Basis Functions

This paper addresses the design and implementation of complex Reinforcem...

Scalable Planning and Learning for Multiagent POMDPs: Extended Version

Online, sample-based planning algorithms for POMDPs have shown great pro...

Control-Tutored Reinforcement Learning: an application to the Herding Problem

In this extended abstract we introduce a novel control-tutored Q-learni...