Provably Learning Nash Policies in Constrained Markov Potential Games

by Pragnya Alatur, et al.

Multi-agent reinforcement learning (MARL) addresses sequential decision-making problems with multiple agents, where each agent optimizes its own objective. In many real-world instances, the agents may not only want to optimize their objectives, but also ensure safe behavior. For example, in traffic routing, each car (agent) aims to reach its destination quickly (objective) while avoiding collisions (safety). Constrained Markov Games (CMGs) are a natural formalism for safe MARL problems, though generally intractable. In this work, we introduce and study Constrained Markov Potential Games (CMPGs), an important class of CMGs. We first show that a Nash policy for CMPGs can be found via constrained optimization. A tempting approach is to solve this problem with Lagrangian-based primal-dual methods. However, as we show, in contrast to the single-agent setting, CMPGs do not satisfy strong duality, rendering such approaches inapplicable and potentially unsafe. To solve the CMPG problem, we propose our algorithm Coordinate-Ascent for CMPGs (CA-CMPG), which provably converges to a Nash policy in tabular, finite-horizon CMPGs. Furthermore, we provide the first sample complexity bounds for learning Nash policies in unknown CMPGs, which, under additional assumptions, guarantee safe exploration.
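The coordinate-ascent idea behind CA-CMPG (agents take turns best-responding while all other agents' policies are held fixed, restricted to joint actions that satisfy the constraints) can be illustrated on a toy one-shot constrained potential game. The game, action sets, potential, and constraint below are invented for illustration only; the paper's CA-CMPG operates on tabular, finite-horizon Markov games, not one-shot games.

```python
# Toy 2-agent constrained potential game (hypothetical example, not from the paper):
# each agent picks an action in {0, 1, 2}; the shared potential is
# phi(a1, a2) = -(a1 - 1)^2 - (a2 - 2)^2, and the joint "safety" constraint
# requires a1 + a2 <= 3.
ACTIONS = [0, 1, 2]

def potential(a):
    # Shared potential: every agent's unilateral gain equals the potential gain.
    return -(a[0] - 1) ** 2 - (a[1] - 2) ** 2

def feasible(a):
    # Stand-in for the safety constraint of a CMPG.
    return a[0] + a[1] <= 3

def coordinate_ascent(a, max_rounds=20):
    """Cycle over agents; each agent best-responds among actions that keep
    the joint action feasible, holding the other agent fixed. Stops when no
    agent can improve, i.e., at a constrained Nash point of the toy game."""
    a = list(a)
    for _ in range(max_rounds):
        changed = False
        for i in range(len(a)):
            best = a[i]
            for cand in ACTIONS:
                trial = a.copy()
                trial[i] = cand
                current = a.copy()
                current[i] = best
                if feasible(trial) and potential(trial) > potential(current):
                    best = cand
            if best != a[i]:
                a[i] = best
                changed = True
        if not changed:
            break
    return tuple(a)
```

Starting from the feasible joint action `(0, 0)`, agent 1 moves to its unconstrained optimum `1`, then agent 2 moves to `2` (still feasible, since `1 + 2 <= 3`), and no further improvement is possible. Because every unilateral improvement raises the bounded potential, this cycle of constrained best responses must terminate; the duality failure noted in the abstract is exactly why such a primal coordinate scheme is used instead of a Lagrangian primal-dual loop.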

