A Unified Approach to Dynamic Decision Problems with Asymmetric Information - Part I: Non-Strategic Agents
We study a general class of dynamic multi-agent decision problems with asymmetric information and non-strategic agents, which includes dynamic teams as a special case. When agents are non-strategic, each agent's strategy is known to the other agents. Nevertheless, the agents' strategy choices and beliefs are interdependent over time, a phenomenon known as signaling. We introduce notions of sufficient private information that effectively compress the agents' information in a mutually consistent manner. Based on these notions of sufficient information, we propose an information state for each agent that is sufficient for decision-making purposes. We present instances of dynamic multi-agent decision problems in which we can determine an information state with a time-invariant domain for each agent. Furthermore, we present a generalization of the policy-independence property of beliefs in Partially Observed Markov Decision Processes (POMDPs) to dynamic multi-agent decision problems. Within the context of dynamic teams with asymmetric information, the proposed set of information states leads to a sequential decomposition that decouples the interdependence between the agents' strategies and beliefs over time, and enables us to formulate a dynamic program that determines a globally optimal policy via backward induction.
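The backward-induction idea behind such a sequential decomposition can be illustrated with a minimal, generic dynamic program over a finite set of information states. The sketch below is not the paper's construction: the information states, actions, costs, and transition map are hypothetical placeholders, and the information-state update is kept deterministic only to keep the example short. It shows how a cost-to-go function and a stagewise policy are obtained by sweeping backward in time.

```python
# Illustrative sketch (not the paper's formulation): backward induction over a
# finite set of information states with hypothetical data.

T = 3  # horizon
info_states = ["s0", "s1"]          # placeholder information-state labels
actions = ["a0", "a1"]              # placeholder decisions
cost = {"s0": {"a0": 1.0, "a1": 2.0},
        "s1": {"a0": 0.5, "a1": 1.5}}          # per-stage cost c_t(s, a)
next_state = {"s0": {"a0": "s1", "a1": "s0"},
              "s1": {"a0": "s0", "a1": "s1"}}  # information-state update

# Value function at the horizon is zero; sweep backward through the stages.
V = {s: 0.0 for s in info_states}
policy = []  # policy[t][s] = optimal action at stage t in information state s

for t in reversed(range(T)):
    V_new, pi_t = {}, {}
    for s in info_states:
        # Q-value of each action: stage cost plus cost-to-go of the updated state.
        q = {a: cost[s][a] + V[next_state[s][a]] for a in actions}
        best = min(q, key=q.get)
        V_new[s], pi_t[s] = q[best], best
    V, policy = V_new, [pi_t] + policy

print("optimal cost-to-go from each information state:", V)
print("stagewise policy:", policy)
```

In the team setting described above, the role of the placeholder state `s` would be played by the proposed information state, so that the minimization at each stage can be carried out without re-solving the coupled strategy-belief dependence over the whole horizon.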