Cooperative Actor-Critic via TD Error Aggregation

by Martin Figura, et al.

In decentralized cooperative multi-agent reinforcement learning, agents can aggregate information from one another to learn policies that maximize a team-average objective function. Despite their willingness to cooperate, individual agents may find direct sharing of information about their local state, reward, and value function undesirable due to privacy concerns. In this work, we introduce a decentralized actor-critic algorithm with TD error aggregation that preserves this privacy and tolerates communication channels subject to time delays and packet dropouts. The cost we pay for such weak assumptions is an increased communication burden for every agent, as measured by the dimension of the transmitted data. Interestingly, the communication burden grows only quadratically with the graph size, which renders the algorithm applicable in large networks. We provide a convergence analysis under diminishing step sizes to verify that the agents maximize the team-average objective function.
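The privacy idea in the abstract can be illustrated with a minimal sketch: each agent computes a TD error from its private reward and value estimates, and only that scalar is exchanged with others. The variable names and the simple averaging step below are illustrative assumptions; the paper's actual algorithm additionally handles time delays and packet dropouts, which this sketch omits.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's algorithm): agents share
# only scalar TD errors, keeping rewards and value functions private.
rng = np.random.default_rng(0)

n_agents = 4
gamma = 0.95  # discount factor

# Private per-agent quantities (never transmitted directly).
rewards = rng.uniform(0.0, 1.0, n_agents)  # local rewards r_i
v_now = rng.uniform(0.0, 1.0, n_agents)    # value estimates V_i(s)
v_next = rng.uniform(0.0, 1.0, n_agents)   # value estimates V_i(s')

# Each agent computes its local TD error from private data.
local_td = rewards + gamma * v_next - v_now

# Agents transmit only their TD errors; averaging them gives an
# estimate of the team-average TD error that can drive critic and
# actor updates toward the team-average objective.
team_td = local_td.mean()

print(team_td)
```

In this simplified view, the transmitted data per agent is a single scalar per step; the quadratic communication burden mentioned in the abstract arises because, under delays and dropouts, each agent must relay TD error entries on behalf of every other agent in the network.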




Related papers:

- Resilient Consensus-based Multi-agent Reinforcement Learning
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning
- Multi-agent Natural Actor-critic Reinforcement Learning Algorithms
- Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents
- Communication-Efficient Actor-Critic Methods for Homogeneous Markov Games
- Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis
- Finite-Time Analysis of Fully Decentralized Single-Timescale Actor-Critic
