Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss

by Mustafa O. Karabag et al.

In a cooperative multiagent system, a collection of agents executes a joint policy in order to achieve some common objective. The successful deployment of such systems hinges on the availability of reliable inter-agent communication. However, many sources of potential disruption to communication exist in practice, such as radio interference, hardware failure, and adversarial attacks. In this work, we develop joint policies for cooperative multiagent systems that are robust to potential losses in communication. More specifically, we develop joint policies for cooperative Markov games with reach-avoid objectives. First, we propose an algorithm for the decentralized execution of joint policies during periods of communication loss. Next, we use the total correlation of the state-action process induced by a joint policy as a measure of the intrinsic dependencies between the agents. We then use this measure to lower-bound the performance of a joint policy when communication is lost. Finally, we present an algorithm that maximizes a proxy to this lower bound in order to synthesize minimum-dependency joint policies that are robust to communication loss. Numerical experiments show that the proposed minimum-dependency policies require minimal coordination between the agents while incurring little to no loss in performance; the total correlation value of the synthesized policy is one-fifth that of a baseline policy that does not take potential communication losses into account. As a result, the performance of the minimum-dependency policies remains consistently high regardless of whether or not communication is available. By contrast, the performance of the baseline policy decreases by twenty percent when communication is lost.
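The total correlation used as the dependency measure above is a standard information-theoretic quantity: for random variables X_1, ..., X_n it equals the sum of the marginal entropies minus the joint entropy, and it vanishes exactly when the variables are independent. The sketch below computes it for a discrete joint distribution over the agents' variables; the function names and the NumPy-based setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of a probability array; zero entries are skipped.
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def total_correlation(joint):
    # Total correlation C = sum_i H(X_i) - H(X_1, ..., X_n) of a joint pmf.
    # `joint` is an n-dimensional array whose i-th axis indexes agent i's
    # variable; entries must sum to 1.
    joint = np.asarray(joint, dtype=float)
    marginal_entropies = sum(
        entropy(joint.sum(axis=tuple(j for j in range(joint.ndim) if j != i)))
        for i in range(joint.ndim)
    )
    return marginal_entropies - entropy(joint)

# Independent agents: total correlation is zero.
p_indep = np.outer([0.5, 0.5], [0.25, 0.75])
print(round(total_correlation(p_indep), 6))  # → 0.0

# Perfectly correlated agents: C = H(X) = log 2 ≈ 0.693147.
p_corr = np.array([[0.5, 0.0], [0.0, 0.5]])
print(round(total_correlation(p_corr), 6))  # → 0.693147
```

Intuitively, a joint policy with low total correlation induces agent behaviors that are close to independent, which is why such policies degrade little when communication, and hence coordination, is lost.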




Differential Privacy in Cooperative Multiagent Planning

Privacy-aware multiagent systems must protect agents' sensitive data whi...

Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems

Communication is important in many multi-agent reinforcement learning (M...

Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning

Executing actions in a correlated manner is a common strategy for human ...

Resilient Consensus-based Multi-agent Reinforcement Learning

Adversarial attacks during training can strongly influence the performan...

Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming

Information sharing is key in building team cognition and enables coordi...

MA-Dreamer: Coordination and communication through shared imagination

Multi-agent RL is rendered difficult due to the non-stationary nature of...

Neurosymbolic Transformers for Multi-Agent Communication

We study the problem of inferring communication structures that can solv...
