Belief and Truth in Hypothesised Behaviours

by Stefano V. Albrecht et al.

There is a long history in game theory on the topic of Bayesian or "rational" learning, in which each player maintains beliefs over a set of alternative behaviours, or types, for the other players. This idea has gained increasing interest in the artificial intelligence (AI) community, where it is used as a method to control a single agent in a system composed of multiple agents with unknown behaviours. The idea is to hypothesise a set of types, each specifying a possible behaviour for the other agents, and to plan our own actions with respect to those types which we believe are most likely, given the observed actions of the agents. The game theory literature studies this idea primarily in the context of equilibrium attainment. In contrast, many AI applications have a focus on task completion and payoff maximisation. With this perspective in mind, we identify and address a spectrum of questions pertaining to belief and truth in hypothesised types. We formulate three basic ways to incorporate evidence into posterior beliefs and show when the resulting beliefs are correct, and when they may fail to be correct. Moreover, we demonstrate that prior beliefs can have a significant impact on our ability to maximise payoffs in the long term, and that they can be computed automatically with consistent performance effects. Furthermore, we analyse the conditions under which we are able to complete our task optimally, despite inaccuracies in the hypothesised types. Finally, we show how the correctness of hypothesised types can be ascertained during the interaction via an automated statistical analysis.
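The core mechanism the abstract describes, maintaining a posterior belief over a set of hypothesised types and updating it from observed actions, can be sketched as a standard Bayesian product update. This is a minimal illustrative sketch, not the paper's algorithm: the two type models, their action probabilities, and the fallback convention when no type explains an observation are all assumptions made for the example.

```python
def update_beliefs(prior, types, observed_action):
    """Bayesian product update: P(type | action) ∝ P(action | type) * P(type).

    prior: dict mapping type name -> prior probability
    types: dict mapping type name -> dict of action -> probability
    observed_action: the action the other agent was observed to take
    """
    posterior = {
        t: prior[t] * types[t].get(observed_action, 0.0)
        for t in prior
    }
    z = sum(posterior.values())
    if z == 0.0:
        # No hypothesised type assigns the observed action positive
        # probability; keeping the prior is one possible convention.
        return dict(prior)
    return {t: p / z for t, p in posterior.items()}

# Two illustrative hypothesised types for the other agent:
types = {
    "cooperator": {"C": 0.9, "D": 0.1},
    "defector":   {"C": 0.2, "D": 0.8},
}
beliefs = {"cooperator": 0.5, "defector": 0.5}
for action in ["D", "D", "C"]:
    beliefs = update_beliefs(beliefs, types, action)
# After two defections and one cooperation, belief mass
# concentrates on the "defector" type.
```

The paper studies how variants of this update (the product posterior and alternatives) behave when the hypothesised types are inaccurate; the sketch above only shows the well-behaved case where the true behaviour is in the type set.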

