Multi-Player Zero-Sum Markov Games with Networked Separable Interactions

07/13/2023
by Chanwoo Park, et al.

We study a new class of Markov games (MGs), multi-player zero-sum Markov games with networked separable interactions (MZNMGs), to model the local interaction structure in non-cooperative multi-agent sequential decision-making. We define an MZNMG as a model in which the payoffs of the auxiliary game associated with each state are zero-sum and have a separable (i.e., polymatrix) structure across the neighbors of some interaction network. We first identify necessary and sufficient conditions under which an MG can be represented as an MZNMG, and show that the set of Markov coarse correlated equilibria (CCE) collapses to the set of Markov Nash equilibria (NE) in these games, in the sense that the product of the per-state marginals of the former across all players yields the latter. Furthermore, we show that finding an approximate Markov stationary CCE in infinite-horizon discounted MZNMGs is PPAD-hard, unless the underlying network has a “star topology”. We then propose fictitious-play-type dynamics, adapting the classical learning dynamics for normal-form games to MZNMGs, and establish convergence guarantees to Markov stationary NE under a star-shaped network structure. Finally, in light of the hardness result, we focus on computing a Markov non-stationary NE and provide finite-iteration guarantees for a series of value-iteration-based algorithms. We also provide numerical experiments to corroborate our theoretical results.
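To make the networked separable (polymatrix) structure concrete, the following is an illustrative sketch of the per-state payoff decomposition; the notation (r_i, r_{i,j}, the player set \mathcal{N}, and the edge set \mathcal{E}) is introduced here for exposition and may differ from the paper's formal definition:

r_i(s, a) \;=\; \sum_{j:\,(i,j)\in\mathcal{E}} r_{i,j}\big(s, a_i, a_j\big) \quad \text{for every player } i \in \mathcal{N},
\qquad
\sum_{i \in \mathcal{N}} r_i(s, a) \;=\; 0 \quad \text{for all } (s, a).

That is, each player's reward at a state is a sum of pairwise terms over its neighbors in the interaction network, and the rewards of all players sum to zero at every state and joint action.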
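The fictitious-play-type dynamics mentioned above generalize classical fictitious play from normal-form games. As a point of reference only (this is not the paper's algorithm for MZNMGs; the function name and the matching-pennies example are illustrative), here is a minimal sketch of fictitious play in a two-player zero-sum matrix game:

import numpy as np

def fictitious_play(A, iters=5000, seed=0):
    """Discrete-time fictitious play for a two-player zero-sum matrix game.

    The row player receives A[i, j]; the column player receives -A[i, j].
    Each round, both players best-respond to the opponent's empirical
    action frequencies.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    # Arbitrary initial actions.
    row_counts[rng.integers(m)] += 1
    col_counts[rng.integers(n)] += 1
    for _ in range(iters):
        row_emp = row_counts / row_counts.sum()   # empirical mixed strategy of row player
        col_emp = col_counts / col_counts.sum()   # empirical mixed strategy of column player
        best_row = np.argmax(A @ col_emp)         # row player maximizes expected payoff
        best_col = np.argmin(row_emp @ A)         # column player minimizes the row player's payoff
        row_counts[best_row] += 1
        col_counts[best_col] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

if __name__ == "__main__":
    # Matching pennies: the unique equilibrium is uniform play for both players.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    x, y = fictitious_play(A)
    print("row strategy:", np.round(x, 3))
    print("col strategy:", np.round(y, 3))

For matching pennies, both empirical strategies approach the uniform distribution, matching the unique Nash equilibrium; in two-player zero-sum matrix games such convergence of the time averages is Robinson's classical result (1951).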
