Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with General Utilities
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints. The objective and constraints are described by general utilities, i.e., nonlinear functions of the long-term state-action occupancy measure, which encompass broader decision-making goals such as risk, exploration, or imitation. The exponential growth of the state-action space with the number of agents presents a challenge to global observability, further exacerbated by the global coupling arising from agents' safety constraints. To tackle this issue, we propose a primal-dual method that uses shadow rewards and κ-hop neighbor truncation under a form of correlation decay property, where κ is the communication radius. In the exact setting, our algorithm converges to a first-order stationary point (FOSP) at a rate of 𝒪(T^{-2/3}). In the sample-based setting, we show that, with high probability, our algorithm requires 𝒪(ϵ^{-3.5}) samples to achieve an ϵ-FOSP with an approximation error of 𝒪(ϕ_0^{2κ}), where ϕ_0 ∈ (0,1). Finally, we illustrate the effectiveness of our algorithm through extensive numerical experiments.
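As a concrete (hedged) picture of the setup described above, the following sketch writes the constrained problem and one plausible form of the primal-dual update; the notation is assumed for illustration and is not taken verbatim from the paper. Here λ^{π_θ} denotes the long-term state-action occupancy measure induced by the joint policy π_θ, f_i and g_i are agent i's general-utility objective and safety constraint, μ_i ≥ 0 are dual variables, and Π_{[0,\bar\mu]} is projection onto an assumed bounded nonnegative interval.

\max_{\theta}\ \sum_{i=1}^{n} f_i\big(\lambda^{\pi_\theta}\big)
\quad \text{s.t.} \quad g_i\big(\lambda^{\pi_\theta}\big) \ge 0, \qquad i = 1,\dots,n,

L(\theta,\mu) \;=\; \sum_{i=1}^{n} \Big[ f_i\big(\lambda^{\pi_\theta}\big) + \mu_i\, g_i\big(\lambda^{\pi_\theta}\big) \Big],
\qquad
\theta_{t+1} = \theta_t + \eta_\theta\, \widehat{\nabla_\theta L}(\theta_t,\mu_t),
\qquad
\mu_{i,t+1} = \Pi_{[0,\bar\mu]}\!\Big( \mu_{i,t} - \eta_\mu\, g_i\big(\hat\lambda^{\pi_{\theta_t}}\big) \Big).

In this picture, the shadow reward of agent i is the gradient of its utility with respect to the occupancy measure, r_i^{\mathrm{shadow}} = \nabla_\lambda f_i(\lambda)\big|_{\lambda = \lambda^{\pi_\theta}}, so ∇_θ f_i(λ^{π_θ}) can be evaluated as an ordinary policy gradient with r_i^{\mathrm{shadow}} playing the role of the reward; the κ-hop truncation means each agent forms its gradient estimate using only the states and actions of agents within its κ-hop neighborhood of the communication graph, which is the source of the 𝒪(ϕ_0^{2κ}) approximation error mentioned above.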