Population-size-Aware Policy Optimization for Mean-Field Games

by Pengdeng Li, et al.

In this work, we attempt to bridge the fields of finite-agent and infinite-agent games by studying how the optimal policies of agents evolve with the number of agents (population size) in mean-field games. This agent-centric perspective contrasts with existing works, which typically focus on the convergence of the empirical distribution of the population. To this end, we must obtain the optimal policies of a set of finite-agent games with different population sizes. However, deriving the closed-form solution for each game is theoretically intractable, training a distinct policy for each game is computationally intensive, and directly applying a policy trained on one game to other games is sub-optimal. We address these challenges with Population-size-Aware Policy Optimization (PAPO). Our contributions are three-fold. First, to efficiently generate effective policies for games with different population sizes, we propose PAPO, which unifies two natural options (augmentation and hypernetwork) and achieves significantly better performance than either. PAPO consists of three components: i) a population-size encoding that transforms the raw population size into an equivalent representation to avoid training collapse, ii) a hypernetwork that generates a distinct policy for each game conditioned on the population size, and iii) the population size as an additional input to the generated policy. Next, we construct a multi-task-based training procedure that efficiently trains the neural networks of PAPO by sampling data from multiple games with different population sizes. Finally, extensive experiments on multiple environments show the significant superiority of PAPO over baselines, and an analysis of the evolution of the generated policies further deepens our understanding of the two fields of finite-agent and infinite-agent games.
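The three components above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the binary encoding, the network sizes, and the linear policy head are all assumptions made for brevity. It shows the wiring the abstract describes: the population size is encoded to a bounded vector, a hypernetwork maps that encoding to the weights of a per-game policy, and the same encoding is also concatenated to the policy's input.

```python
import numpy as np

def encode_population_size(n, bits=16):
    """Stand-in for the paper's population-size encoding (scheme assumed):
    a fixed-length binary vector keeps the input magnitude bounded no
    matter how large the population size n grows."""
    return np.array([(n >> i) & 1 for i in range(bits)], dtype=np.float64)

class HyperPolicy:
    """Hypernetwork that emits the weights of a small linear policy,
    conditioned on the encoded population size (components ii and iii)."""
    def __init__(self, state_dim, action_dim, bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.state_dim, self.action_dim, self.bits = state_dim, action_dim, bits
        # Policy input = state concatenated with the encoding (component iii).
        in_dim = state_dim + bits
        # One-hidden-layer hypernetwork: encoding -> flattened policy weights.
        self.W1 = rng.normal(0.0, 0.1, (bits, 32))
        self.W2 = rng.normal(0.0, 0.1, (32, in_dim * action_dim))

    def policy_logits(self, state, n):
        enc = encode_population_size(n, self.bits)
        hidden = np.tanh(enc @ self.W1)
        # Generated policy weights, distinct for each population size.
        w = (hidden @ self.W2).reshape(self.state_dim + self.bits, self.action_dim)
        x = np.concatenate([state, enc])
        return x @ w

hp = HyperPolicy(state_dim=4, action_dim=3)
s = np.ones(4)
# Different population sizes yield different generated policies,
# so the same state maps to different action logits.
logits_10 = hp.policy_logits(s, 10)
logits_1000 = hp.policy_logits(s, 1000)
```

In a full method the hypernetwork parameters would be trained by sampling trajectories from games with many different population sizes (the multi-task procedure the abstract mentions); here the weights are random purely to exercise the data flow.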


