Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi

by   Hadi Nekoei, et al.

Cooperative Multi-agent Reinforcement Learning (MARL) algorithms with Zero-Shot Coordination (ZSC) have gained significant attention in recent years. ZSC refers to the ability of agents to coordinate zero-shot (without additional interaction experience) with independently trained agents. While ZSC is crucial for cooperative MARL agents, it might not be achievable in complex tasks and changing environments. Agents also need to adapt and improve their performance with minimal interaction with other agents. In this work, we show empirically that state-of-the-art ZSC algorithms perform poorly when paired with agents trained with different learning methods, and that they require millions of interaction samples to adapt to these new partners. To investigate this issue, we formally define a framework based on a popular cooperative multi-agent game called Hanabi to evaluate the adaptability of MARL methods. In particular, we create a diverse set of pre-trained agents and define a new metric, adaptation regret, that measures an agent's ability to efficiently adapt and improve its coordination performance when paired with a held-out pool of partners, on top of its ZSC performance. Evaluating several SOTA algorithms with our framework, we find that naive Independent Q-Learning (IQL) agents in most cases adapt as quickly as the SOTA ZSC algorithm, Off-Belief Learning (OBL). This finding raises an interesting research question: how can we design MARL algorithms with both high ZSC performance and the capability to adapt quickly to unseen partners? As a first step, we study the role of different hyper-parameters and design choices on the adaptability of current MARL algorithms. Our experiments show that two categories of hyper-parameters, those controlling training-data diversity and those controlling the optimization process, have a significant impact on the adaptability of Hanabi agents.
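The abstract names the adaptation-regret metric without giving its formula. As a rough, hypothetical sketch (the paper's exact definition is not reproduced here), such a metric could be computed as the cumulative gap between a reference score and the adapting agent's cross-play score at each adaptation checkpoint, averaged over a held-out pool of partners. All function and variable names below are illustrative assumptions, not the authors' implementation.

```python
from typing import Sequence

def adaptation_regret(
    scores_per_partner: Sequence[Sequence[float]],
    reference_score: float,
) -> float:
    """Hypothetical sketch: for each held-out partner, sum the per-step
    shortfall between a reference score and the agent's score over the
    adaptation checkpoints, then average across partners."""
    regrets = []
    for scores in scores_per_partner:
        # Cumulative shortfall relative to the reference score.
        regrets.append(sum(reference_score - s for s in scores))
    return sum(regrets) / len(regrets)

# Toy usage: two held-out partners, three adaptation checkpoints each.
scores = [[10.0, 15.0, 20.0], [12.0, 18.0, 22.0]]
print(adaptation_regret(scores, reference_score=24.0))  # → 23.5
```

Under this reading, a lower adaptation regret means the agent closes the gap to the reference score with fewer interaction samples, which matches the paper's emphasis on sample-efficient adaptation.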




"Other-Play" for Zero-Shot Coordination

We consider the problem of zero-shot coordination - constructing AI agen...

Improving Zero-Shot Coordination Performance Based on Policy Similarity

Over these years, multi-agent reinforcement learning has achieved remark...

Continuous Coordination As a Realistic Scenario for Lifelong Learning

Current deep reinforcement learning (RL) algorithms are still highly tas...

Any-Play: An Intrinsic Augmentation for Zero-Shot Coordination

Cooperative artificial intelligence with human or superhuman proficiency...

Learning to Generalize with Object-centric Agents in the Open World Survival Game Crafter

Reinforcement learning agents must generalize beyond their training expe...

Decentralized Inference via Capability Type Structures in Cooperative Multi-Agent Systems

This work studies the problem of ad hoc teamwork in teams composed of ag...

Know your audience: specializing grounded language models with the game of Dixit

Effective communication requires adapting to the idiosyncratic common gr...
