A Structured Prediction Approach for Generalization in Cooperative Multi-Agent Reinforcement Learning

by Nicolas Carion, et al.

Effective coordination is crucial for solving multi-agent collaborative (MAC) problems. While centralized reinforcement learning methods can optimally solve small MAC instances, they do not scale to large problems and fail to generalize to scenarios different from those seen during training. In this paper, we consider MAC problems with some intrinsic notion of locality (e.g., geographic proximity) such that interactions between agents and tasks are locally limited. By leveraging this property, we introduce a novel structured prediction approach to assign agents to tasks. At each step, the assignment is obtained by solving a centralized optimization problem (the inference procedure) whose objective function is parameterized by a learned scoring model. We propose different combinations of inference procedures and scoring models able to represent coordination patterns of increasing complexity. The resulting assignment policy can be efficiently learned on small problem instances and readily reused in problems with more agents and tasks (i.e., zero-shot generalization). We report experimental results on a toy search and rescue problem and on several target selection scenarios in StarCraft: Brood War, in which our model significantly outperforms strong rule-based baselines on instances with 5 times more agents and tasks than those seen during training.
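To make the abstract's pipeline concrete, here is a minimal sketch of the inference-over-a-learned-score idea: a scoring model produces an agent-by-task score matrix, and a centralized optimizer picks the assignment maximizing total score. The bilinear scoring function, the feature dimensions, and the use of the Hungarian algorithm are illustrative assumptions, not the paper's actual scoring models or inference procedures.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def score_matrix(agent_feats, task_feats, W):
    # Hypothetical bilinear scoring model: score[i, j] = agent_i^T W task_j.
    # In the paper, the scoring model's parameters would be learned; here W
    # is random for illustration.
    return agent_feats @ W @ task_feats.T

# Toy instance: 4 agents, 4 tasks, 3-dimensional local features each.
n_agents, n_tasks, d = 4, 4, 3
agents = rng.normal(size=(n_agents, d))
tasks = rng.normal(size=(n_tasks, d))
W = rng.normal(size=(d, d))

scores = score_matrix(agents, tasks, W)

# Inference procedure: maximize the total assignment score. The Hungarian
# algorithm (linear_sum_assignment) stands in for the paper's centralized
# optimization; it runs on any matrix size, so the same scoring model can
# be reused on instances with more agents and tasks (zero-shot).
rows, cols = linear_sum_assignment(scores, maximize=True)
assignment = dict(zip(rows.tolist(), cols.tolist()))
```

Because the scoring model only consumes per-agent and per-task features while the combinatorial step is delegated to a generic solver, the number of learned parameters is independent of the instance size, which is the property that enables generalization to larger problems.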

