Learning Zero-sum Stochastic Games with Posterior Sampling
In this paper, we propose Posterior Sampling Reinforcement Learning for Zero-sum Stochastic Games (PSRL-ZSG), the first online learning algorithm that achieves a Bayesian regret bound of O(HS√(AT)) in infinite-horizon zero-sum stochastic games with the average-reward criterion. Here H is an upper bound on the span of the bias function, S is the number of states, A is the number of joint actions, and T is the horizon. We consider the online setting where the opponent cannot be controlled and may follow any arbitrary time-adaptive, history-dependent strategy. This improves the best existing regret bound of O(∛(DS^2AT^2)) by Wei et al. (2017) under the same assumption and matches the theoretical lower bound in A and T.
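To make the posterior sampling idea concrete, the sketch below shows a simplified episode loop: maintain a Dirichlet posterior over transition probabilities, sample a game model each episode, and plan a maximin policy for the controlled player on the sampled model. This is only an illustration under assumed simplifications (known rewards, discounted Shapley value iteration with a matrix-game LP solver in place of the paper's average-reward planning, and illustrative state/action sizes); it is not the exact PSRL-ZSG algorithm.

```python
# Illustrative sketch of posterior sampling in a zero-sum stochastic game.
# Assumptions (not from the paper): known reward table, Dirichlet posterior
# over transitions, discounted Shapley value iteration instead of the paper's
# average-reward maximin planning.
import numpy as np
from scipy.optimize import linprog

S, A1, A2 = 5, 2, 2           # states, max-player actions, min-player actions
gamma = 0.95                  # discount used only in this simplified sketch
rng = np.random.default_rng(0)
reward = rng.uniform(size=(S, A1, A2))   # known rewards (assumption)
dirichlet = np.ones((S, A1, A2, S))      # posterior counts for P(.|s, a1, a2)

def solve_matrix_game(M):
    """Return the max-player's maximin mixed strategy and the value of matrix M."""
    m, n = M.shape
    # Variables (p_1..p_m, v): maximize v s.t. M^T p >= v, sum(p) = 1, p >= 0.
    c = np.zeros(m + 1); c[-1] = -1.0
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

for episode in range(10):
    # 1. Sample a transition model from the Dirichlet posterior.
    P = np.array([[[rng.dirichlet(dirichlet[s, a1, a2])
                    for a2 in range(A2)] for a1 in range(A1)] for s in range(S)])
    # 2. Shapley value iteration on the sampled game gives a maximin policy.
    V = np.zeros(S)
    for _ in range(200):
        Q = reward + gamma * P @ V                 # shape (S, A1, A2)
        V = np.array([solve_matrix_game(Q[s])[1] for s in range(S)])
    policy = [solve_matrix_game(Q[s])[0] for s in range(S)]
    # 3. Play the episode with this policy against the (uncontrolled) opponent
    #    and add the observed transition counts to `dirichlet`; the environment
    #    interaction is omitted in this sketch.
```

The key design point this illustrates is that exploration comes entirely from sampling the model from the posterior each episode rather than from explicit optimism bonuses; the opponent's actions are simply observed and folded into the posterior update.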