Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi

07/15/2021
by Ho Chit Siu, et al.

Deep reinforcement learning has generated superhuman AI in competitive games such as Go and StarCraft. Can similar learning techniques create a superior AI teammate for human-machine collaborative games? Will humans prefer AI teammates that improve objective team performance or those that improve subjective metrics of trust? In this study, we perform a single-blind evaluation of teams of humans and AI agents in the cooperative card game Hanabi, with both rule-based and learning-based agents. In addition to the game score, used as an objective metric of human-AI team performance, we also quantify subjective measures of the human's perceived performance, teamwork, interpretability, trust, and overall preference for the AI teammate. We find that humans have a clear preference for a rule-based AI teammate (SmartBot) over a state-of-the-art learning-based AI teammate (Other-Play) across nearly all subjective metrics, and generally view the learning-based agent negatively, despite no statistically significant difference in game score. This result has implications for future AI design and reinforcement learning benchmarking, highlighting the need to incorporate subjective metrics of human-AI teaming rather than focusing solely on objective task performance.
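The "no statistically significant difference in game score" claim rests on a standard two-sample comparison. As a minimal stdlib-only sketch of how such a check is typically run, the snippet below computes Welch's t statistic and degrees of freedom for two independent score samples; the `welch_t` helper and the sample values are illustrative assumptions, not data or code from the paper.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2_a, se2_b = va / na, vb / nb  # squared standard errors of each mean
    t = (ma - mb) / math.sqrt(se2_a + se2_b)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a**2 / (na - 1) + se2_b**2 / (nb - 1))
    return t, df

# Hypothetical Hanabi scores (0-25) for the two teammate conditions:
smartbot_scores = [17, 15, 18, 16, 14, 17, 15, 16]
otherplay_scores = [16, 14, 17, 15, 13, 18, 15, 16]
t, df = welch_t(smartbot_scores, otherplay_scores)
```

A t statistic near zero relative to the critical value for `df` degrees of freedom is what "no statistical difference" means here: the subjective-preference gap the study reports is not explained by a gap in objective score.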


Related research

01/31/2022
Warmth and competence in human-agent cooperation
Interaction and cooperation with humans are overarching aspirations of a...

08/17/2017
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
As AI continues to advance, human-AI teams are inevitable. However, prog...

09/18/2023
Mechanic Maker 2.0: Reinforcement Learning for Evaluating Generated Rules
Automated game design (AGD), the study of automatically generating game ...

11/18/2021
Reinforcement Learning on Human Decision Models for Uniquely Collaborative AI Teammates
In 2021 the Johns Hopkins University Applied Physics Laboratory held an ...

09/14/2017
Towards personalized human AI interaction - adapting the behavior of AI agents using neural signatures of subjective interest
Reinforcement Learning AI commonly uses reward/penalty signals that are ...

08/22/2023
Building Better Human-Agent Teams: Tradeoffs in Helpfulness and Humanness in Voice
We manipulate the helpfulness and voice type of a voice-only agent teamm...
