PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination

by Xingzhou Lou, et al.

Zero-shot human-AI coordination promises collaboration with humans without any human data. Prevailing methods train the ego agent against a population of partners via self-play. However, these methods suffer from two problems: 1) the diversity of a finite partner population is limited, which in turn limits the trained ego agent's capacity to collaborate with novel humans; 2) current methods provide only a single common best response to every partner in the population, which may yield poor zero-shot coordination with a novel partner or human. To address these issues, we first propose a policy ensemble method that increases the diversity of partners in the population, and then develop a context-aware method that enables the ego agent to analyze and identify the partner's potential policy primitives and act accordingly. In this way, the ego agent learns more universal cooperative behaviors for collaborating with diverse partners. We conduct experiments in the Overcooked environment and evaluate the zero-shot human-AI coordination performance of our method with both behavior-cloned human proxies and real humans. The results demonstrate that our method significantly increases partner diversity and enables ego agents to learn more diverse behaviors than the baselines, achieving state-of-the-art performance in all scenarios.
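The two ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; all names (`make_primitive`, `ensemble_partner`, `infer_context`, `context_aware_ego`) and the toy action set are hypothetical. It shows (1) a partner built as a weighted ensemble of policy primitives, which produces more behavioral variety than any single fixed policy, and (2) an ego agent that infers a context (here, a simple empirical distribution over the partner's observed actions) and conditions its own action on that belief.

```python
import random

# Toy action set; a real Overcooked agent would have task-specific actions.
ACTIONS = ["up", "down", "left", "right", "stay"]

def make_primitive(preferred):
    """A policy primitive that mostly takes its preferred action."""
    def policy(obs):
        return preferred if random.random() < 0.8 else random.choice(ACTIONS)
    return policy

def ensemble_partner(primitives, weights):
    """Policy-ensemble partner: each step, sample one primitive by weight.

    The mixture behaves unlike any single member, increasing diversity.
    """
    def policy(obs):
        prim = random.choices(primitives, weights=weights, k=1)[0]
        return prim(obs)
    return policy

def infer_context(action_history):
    """Empirical action distribution, a crude belief over partner primitives."""
    counts = {a: 0 for a in ACTIONS}
    for a in action_history:
        counts[a] += 1
    total = max(len(action_history), 1)
    return {a: c / total for a, c in counts.items()}

def context_aware_ego(context):
    """Ego agent conditions on the inferred context; here it simply
    responds to the partner's most likely behavior."""
    return max(context, key=context.get)

# Usage: observe a partner for 50 steps, infer context, then act on it.
random.seed(0)
partner = ensemble_partner(
    [make_primitive("up"), make_primitive("left")], weights=[0.7, 0.3]
)
history = [partner(obs=None) for _ in range(50)]
ego_action = context_aware_ego(infer_context(history))
```

In the actual method the context would be a learned embedding of the partner's recent trajectory rather than a raw action histogram, and the ego policy would be trained with reinforcement learning; the sketch only conveys the control flow.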


Adaptive Coordination in Social Embodied Rearrangement

We present the task of "Social Rearrangement", consisting of cooperative...

Improving Zero-Shot Coordination Performance Based on Policy Similarity

Over these years, multi-agent reinforcement learning has achieved remark...

On the Utility of Learning about Humans for Human-AI Coordination

While we would like agents that can coordinate with humans, current algo...

Learning to Coordinate with Humans using Action Features

An unaddressed challenge in human-AI coordination is to enable AI agents...

Equivariant Networks for Zero-Shot Coordination

Successful coordination in Dec-POMDPs requires agents to adopt robust st...

Heterogeneous Social Value Orientation Leads to Meaningful Diversity in Sequential Social Dilemmas

In social psychology, Social Value Orientation (SVO) describes an indivi...

Any-Play: An Intrinsic Augmentation for Zero-Shot Coordination

Cooperative artificial intelligence with human or superhuman proficiency...
