Empirical Policy Optimization for n-Player Markov Games

10/18/2021
by Yuanheng Zhu, et al.

In single-agent Markov decision processes, an agent can optimize its policy based on its interaction with the environment. In multi-player Markov games (MGs), however, the interaction is non-stationary due to the behaviors of other players, so the agent has no fixed optimization objective. In this paper, we treat the evolution of player policies as a dynamical process and propose a novel learning scheme for finding Nash equilibria. The core idea is to evolve each player's policy according not only to its current in-game performance, but to an aggregation of its performance over history. We show that for a variety of MGs, players in our learning scheme provably converge to a point that approximates a Nash equilibrium. Combined with neural networks, we develop the empirical policy optimization algorithm, which is implemented in a reinforcement-learning framework and runs in a distributed manner, with each player optimizing its policy based on its own observations. We use two numerical examples to validate the convergence property on small-scale MGs with n ≥ 2 players, and a Pong example to demonstrate the potential of our algorithm on large games.
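
The paper's algorithm is not reproduced here, but the core principle the abstract describes, updating against an aggregate of historical performance rather than the latest payoff alone, can be illustrated with a small sketch. The toy below uses rock-paper-scissors and a fictitious-play-style update (a stand-in technique, not the paper's method): each player tracks the running average of its per-action payoffs and responds to that aggregate, and the empirical action frequencies approach the uniform Nash equilibrium. All names and update rules in the sketch are illustrative assumptions.

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player; the column player receives -A.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

n_actions, steps = 3, 20000
avg_payoff = [np.zeros(n_actions), np.zeros(n_actions)]  # aggregated per-action performance
counts = [np.zeros(n_actions), np.zeros(n_actions)]      # empirical action frequencies

a = [0, 1]  # arbitrary initial actions
for t in range(1, steps + 1):
    # Per-action payoff of each player against the opponent's last action.
    u1 = A[:, a[1]]
    u2 = -A[a[0], :]
    # Aggregate performance over history via a running average.
    avg_payoff[0] += (u1 - avg_payoff[0]) / t
    avg_payoff[1] += (u2 - avg_payoff[1]) / t
    # Each player responds to its aggregated, not instantaneous, performance.
    a = [int(np.argmax(avg_payoff[0])), int(np.argmax(avg_payoff[1]))]
    counts[0][a[0]] += 1
    counts[1][a[1]] += 1

# Empirical frequencies should be close to the uniform equilibrium (1/3, 1/3, 1/3).
print("empirical policy, player 1:", np.round(counts[0] / steps, 3))
print("empirical policy, player 2:", np.round(counts[1] / steps, 3))
```

Responding to the latest payoff alone would cycle indefinitely in this game; aggregating over history is what damps the non-stationarity introduced by the other player, which is the intuition the abstract appeals to.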
