Model-Free Learning and Optimal Policy Design in Multi-Agent MDPs Under Probabilistic Agent Dropout

04/24/2023
by Carmel Fiscko, et al.

This work studies a multi-agent Markov decision process (MDP) that can undergo agent dropout, and the computation of policies for the post-dropout system based on control of, and sampling from, the pre-dropout system. The controller's objective is to find an optimal policy that maximizes the value of the expected system given a priori knowledge of the agents' dropout probabilities. Finding an optimal policy for any specific dropout realization is a special case of this problem. For MDPs with a certain transition independence and reward separability structure, we assume that removing agents from the system forms a new MDP comprising the remaining agents, with new state and action spaces, transition dynamics that marginalize the removed agents, and rewards that are independent of the removed agents. We first show that under these assumptions, the value of the expected post-dropout system can be represented by a single MDP; this "robust MDP" eliminates the need to evaluate all 2^N realizations of the system, where N denotes the number of agents. More significantly, in a model-free context, we show that the robust MDP value can be estimated from samples generated by the pre-dropout system, meaning that robust policies can be found before dropout occurs. This fact is used to propose a policy importance sampling (IS) routine that performs policy evaluation for dropout scenarios while controlling the existing system with good pre-dropout policies. The policy IS routine produces value estimates for both the robust MDP and specific post-dropout system realizations, and it is justified with exponential confidence bounds. Finally, the utility of this approach is verified in simulation, showing how structural properties of agent dropout can help a controller find good post-dropout policies before dropout occurs.
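The policy IS routine evaluates candidate post-dropout policies from trajectories gathered while the pre-dropout system runs under a known behavior policy. Below is a minimal sketch of the generic mechanism this relies on, ordinary trajectory-wise importance sampling for off-policy evaluation in a tabular MDP. The function name, tabular policy representation, and discount factor are illustrative assumptions, and the dropout-specific marginalization structure described in the abstract is not modeled here; this is not the authors' implementation.

```python
import numpy as np

def is_value_estimate(trajectories, behavior_policy, target_policy, gamma=0.95):
    """Estimate the target policy's value from behavior-policy trajectories.

    trajectories: list of rollouts, each a list of (state, action, reward)
                  tuples collected while acting under behavior_policy.
    behavior_policy, target_policy: arrays where policy[state, action] is the
                  probability of taking `action` in integer-indexed `state`.
    """
    estimates = []
    for traj in trajectories:
        weight = 1.0    # cumulative importance ratio for the trajectory
        ret = 0.0       # discounted return of the trajectory
        discount = 1.0
        for s, a, r in traj:
            # Reweight by the likelihood ratio of the target vs. behavior policy.
            weight *= target_policy[s, a] / behavior_policy[s, a]
            ret += discount * r
            discount *= gamma
        estimates.append(weight * ret)
    return float(np.mean(estimates))

# Example usage with toy tabular policies over 3 states and 2 actions:
# behavior = np.full((3, 2), 0.5)                           # uniform pre-dropout policy
# target = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])   # candidate post-dropout policy
# traj = [(0, 1, 1.0), (2, 0, 0.5), (1, 1, 2.0)]
# print(is_value_estimate([traj], behavior, target))
```

Ordinary per-trajectory IS is unbiased but its variance grows with the horizon; the exponential confidence bounds mentioned in the abstract would govern how many behavior-policy samples are needed before such estimates become reliable, though the details of the authors' routine and the marginalization over dropped agents are beyond this sketch.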
