Improving Policies via Search in Cooperative Partially Observable Games

by Adam Lerer, et al.

Recent superhuman results in games have largely been achieved in a variety of zero-sum settings, such as Go and Poker, in which agents need to compete against others. However, just like humans, real-world AI systems also have to coordinate and communicate with other agents in cooperative partially observable environments. These settings commonly require participants both to interpret the actions of others and to act in a way that is informative when being interpreted. These abilities are typically summarized as theory of mind and are seen as crucial for social interactions. In this paper we propose two different search techniques that can be applied to improve an arbitrary agreed-upon policy in a cooperative partially observable game. The first, single-agent search, effectively converts the problem into a single-agent setting by making all but one of the agents play according to the agreed-upon policy. In contrast, in multi-agent search all agents carry out the same common-knowledge search procedure whenever doing so is computationally feasible, and fall back to playing according to the agreed-upon policy otherwise. We prove that these search procedures are theoretically guaranteed to at least maintain the original performance of the agreed-upon policy (up to a bounded approximation error). In the benchmark challenge problem of Hanabi, our search technique greatly improves the performance of every agent we tested, and when applied to a policy trained using RL achieves a new state-of-the-art score of 24.61 / 25 in the game, compared to a previous best of 24.08 / 25.
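The single-agent search idea from the abstract can be sketched in a few lines: the searching agent samples hidden states consistent with its observations, evaluates each candidate action by rollouts in which every *other* agent plays the agreed-upon blueprint policy, and picks the best action. This is a minimal illustrative sketch, not the paper's implementation; the one-step toy game, and names like `single_agent_search` and `toy_rollout`, are hypothetical (the paper's actual domain is Hanabi).

```python
def single_agent_search(belief_samples, candidate_actions, blueprint, rollout):
    """Pick the action with the highest expected return over sampled hidden
    states, assuming all other agents play the blueprint inside the rollouts."""
    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        # Monte Carlo estimate of the action's value under the current belief.
        value = sum(rollout(s, action, blueprint) for s in belief_samples)
        value /= len(belief_samples)
        if value > best_value:
            best_action, best_value = action, value
    return best_action


def toy_rollout(hidden_state, my_action, blueprint):
    # Hypothetical one-step cooperative game: the partner (who can see the
    # hidden state) responds via the blueprint, and the team scores 1 only
    # when both actions match the hidden state.
    partner_action = blueprint(hidden_state)
    return float(my_action == hidden_state and partner_action == hidden_state)


# The searcher cannot see the hidden state; its belief puts 3/4 of the
# sampled states on 1, so search selects action 1 (expected value 0.75).
choice = single_agent_search(
    belief_samples=[1, 1, 1, 0],
    candidate_actions=[0, 1],
    blueprint=lambda s: s,
    rollout=toy_rollout,
)
```

Because the blueprint's own action is always among the candidates, the searched action's estimated value can never fall below the blueprint's, which is the intuition behind the paper's guarantee that search at least maintains the agreed-upon policy's performance (up to sampling error).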




Learning to Cooperate via Policy Search

Cooperative games are those in which both agents share the same payoff s...

Learned Belief Search: Efficiently Improving Policies in Partially Observable Settings

Search is an important tool for computing effective policies in single- ...

Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning

In recent years we have seen fast progress on a number of benchmark prob...

Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning

When observing the actions of others, humans carry out inferences about ...

Generalized Beliefs for Cooperative AI

Self-play is a common paradigm for constructing solutions in Markov game...

Solving Transition-Independent Multi-agent MDPs with Sparse Interactions (Extended version)

In cooperative multi-agent sequential decision making under uncertainty,...

Human-Agent Cooperation in Bridge Bidding

We introduce a human-compatible reinforcement-learning approach to a coo...
