Logit-Q Learning in Markov Games

05/26/2022
by Muhammed O. Sayin et al.

We present new independent learning dynamics that provably converge to an efficient equilibrium (also known as an optimal equilibrium), i.e., one maximizing the social welfare, in infinite-horizon discounted identical-interest Markov games (MGs). This goes beyond the recent concentration of progress on provable convergence to some (possibly inefficient) equilibrium. The dynamics are independent in the sense that agents take actions without accounting for the others' objectives in their decision-making, while their decisions remain consistent with their own objectives under behavioral learning models. Independent and simultaneous adaptation of agents in an MG poses two key challenges: (i) possible convergence to an inefficient equilibrium, and (ii) possible non-stationarity of the environment from a single agent's viewpoint. We address the former by generalizing the log-linear learning dynamics to MG settings, and the latter through a play-in-rounds scheme. In particular, in an MG, agents play (normal-form) stage games associated with the visited state, based on their continuation-payoff estimates. We let the agents play these stage games in rounds such that their continuation-payoff estimates are updated only at the end of each round, which makes the stage games stationary within each round. Hence, the dynamics approximate value iteration, and convergence to the social optimum of the underlying MG follows.
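To make the round structure concrete, the sketch below illustrates the pattern the abstract describes: within a round, continuation-payoff estimates are frozen, so each visited state induces a stationary stage game on which agents run log-linear (logit) learning; at the end of the round, the estimates are refreshed, approximating one value-iteration backup. This is a minimal illustration under simplifying assumptions (two agents, deterministic transitions, and a revising agent that can evaluate its stage payoff for each of its own actions, as standard log-linear learning requires); all names and parameters (`logit_choice`, `temperature`, `steps_per_round`, etc.) are illustrative, not taken from the paper.

```python
import numpy as np

def logit_choice(payoffs, temperature, rng):
    """Sample an action from the logit (softmax) distribution over payoffs."""
    z = np.asarray(payoffs, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return rng.choice(len(p), p=p / p.sum())

def logit_q_learning(reward, transition, n_actions, gamma=0.95,
                     temperature=0.1, n_rounds=200, steps_per_round=500,
                     seed=0):
    """Hypothetical sketch of logit-Q-style dynamics with play in rounds.

    reward[s][a1][a2]     -- shared (identical-interest) stage reward
    transition[s][a1][a2] -- next state index (deterministic, for simplicity)
    """
    rng = np.random.default_rng(seed)
    n_states = len(reward)
    v = np.zeros(n_states)            # continuation-payoff estimates
    profile = [[0, 0] for _ in range(n_states)]  # current joint action per state

    for _ in range(n_rounds):
        # Within a round, v is frozen, so each state's stage game
        # r(s, a) + gamma * v(s') is stationary.
        q = np.zeros(n_states)        # running average of realized stage payoffs
        counts = np.zeros(n_states)
        s = 0
        for _ in range(steps_per_round):
            a1, a2 = profile[s]
            # Log-linear learning: one randomly chosen agent revises its action
            # with a logit response to the stage payoffs; the other stays put.
            if rng.random() < 0.5:
                payoffs = [reward[s][b][a2] + gamma * v[transition[s][b][a2]]
                           for b in range(n_actions)]
                a1 = logit_choice(payoffs, temperature, rng)
            else:
                payoffs = [reward[s][a1][b] + gamma * v[transition[s][a1][b]]
                           for b in range(n_actions)]
                a2 = logit_choice(payoffs, temperature, rng)
            profile[s] = [a1, a2]
            s_next = transition[s][a1][a2]
            counts[s] += 1
            q[s] += (reward[s][a1][a2] + gamma * v[s_next] - q[s]) / counts[s]
            s = s_next
        # End of round: refresh continuation-payoff estimates, approximating
        # one value-iteration backup per visited state.
        visited = counts > 0
        v[visited] = q[visited]
    return v, profile
```

In this sketch, the known convergence of log-linear learning toward a welfare-maximizing action profile applies round by round precisely because the payoffs are frozen, and the end-of-round assignment `v[visited] = q[visited]` plays the role of the value-iteration backup described in the abstract.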
