The Partially Observable Asynchronous Multi-Agent Cooperation Challenge

by Meng Yao, et al.

Multi-agent reinforcement learning (MARL) has received increasing attention for its applications in various domains. Researchers have paid particular attention to its partially observable and cooperative settings, which reflect real-world requirements. To compare the performance of different algorithms, standardized environments have been designed, such as the StarCraft Multi-Agent Challenge, one of the most successful MARL benchmarks. To the best of our knowledge, most current environments are synchronous, meaning agents execute actions at the same pace. However, heterogeneous agents usually have their own action spaces, and there is no guarantee that actions from different agents take the same execution cycle, which leads to asynchronous multi-agent cooperation. Inspired by the Wargame, a confrontation game between two armies abstracted from real-world environments, we propose the first Partially Observable Asynchronous multi-agent Cooperation challenge (POAC) for the MARL community. Specifically, POAC supports two teams of heterogeneous agents that fight each other, where each agent selects actions based on its own observations and cooperates asynchronously with its allies. Moreover, POAC is a lightweight, flexible, and easy-to-use environment that users can configure to meet different experimental requirements, such as a self-play mode, a human-AI mode, and so on. Along with our benchmark, we offer six game scenarios of varying difficulty with built-in rule-based AIs as opponents. Finally, since most MARL algorithms are designed for synchronous agents, we revise several representative ones to fit the asynchronous setting, and their relatively poor experimental results validate the challenge posed by POAC. Source code is released in <>.
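The asynchrony described above can be made concrete with a toy simulation: agents share a global clock, but each action occupies a different number of ticks, so an agent may only choose a new action once its previous one completes. The sketch below is illustrative only; the agent names, durations, and loop structure are assumptions and do not reflect POAC's actual API.

```python
class AsyncAgent:
    """Toy agent whose actions each take a fixed number of ticks to complete."""

    def __init__(self, name, action_duration):
        self.name = name
        self.action_duration = action_duration  # ticks one action occupies
        self.busy_until = 0                     # tick at which current action ends
        self.actions_taken = 0

    def maybe_act(self, tick):
        """Select a new action only if the previous one has finished."""
        if tick >= self.busy_until:
            self.busy_until = tick + self.action_duration
            self.actions_taken += 1
            return f"{self.name}-action-{self.actions_taken}"
        return None  # still executing the previous action


def run_episode(agents, total_ticks):
    """Advance a shared clock; agents act asynchronously at their own cadence."""
    log = []
    for tick in range(total_ticks):
        for agent in agents:
            action = agent.maybe_act(tick)
            if action is not None:
                log.append((tick, action))
    return log


# A fast unit acts every tick; a slow unit needs three ticks per action.
agents = [AsyncAgent("infantry", action_duration=1),
          AsyncAgent("tank", action_duration=3)]
log = run_episode(agents, total_ticks=6)
```

Over six ticks the fast unit issues six actions while the slow unit issues only two (at ticks 0 and 3), which is exactly the mismatch in decision points that synchronous MARL algorithms do not model.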


