Learning in Cooperative Multiagent Systems Using Cognitive and Machine Models

by Thuy Ngoc Nguyen, et al.

Developing effective Multi-Agent Systems (MAS) is critical for many applications requiring collaboration and coordination with humans. Despite the rapid advance of Multi-Agent Deep Reinforcement Learning (MADRL) in cooperative MAS, one major challenge is the simultaneous learning and interaction of independent agents in dynamic environments in the presence of stochastic rewards. State-of-the-art MADRL models struggle to perform well in Coordinated Multi-agent Object Transportation Problems (CMOTPs), wherein agents must coordinate with each other and learn from stochastic rewards. In contrast, humans often learn rapidly to adapt to nonstationary environments that require coordination among people. In this paper, motivated by the demonstrated ability of cognitive models based on Instance-Based Learning Theory (IBLT) to capture human decisions in many dynamic decision-making tasks, we propose three variants of Multi-Agent IBL models (MAIBL). The idea of these MAIBL algorithms is to combine the cognitive mechanisms of IBLT with the techniques of MADRL models to address coordination in MAS with stochastic rewards from the perspective of independent learners. We demonstrate that the MAIBL models exhibit faster learning and achieve better coordination in a dynamic CMOTP task with various settings of stochastic rewards compared to current MADRL models. We discuss the benefits of integrating cognitive insights into MADRL models.
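To make the IBLT mechanisms the abstract refers to concrete, the following is a minimal, hypothetical sketch of a single instance-based learner: it stores instances (option, outcome, timestamp), computes ACT-R-style base-level activations with decay and noise, converts them to retrieval probabilities, and chooses by blended value. The class name, parameter values, and the simple two-option usage are illustrative assumptions, not the authors' MAIBL implementation.

```python
import math
import random

class IBLAgent:
    """Minimal Instance-Based Learning sketch (not the paper's MAIBL code).

    Stores instances as (outcome, timestamp) per option, computes
    base-level activation with decay and Gaussian noise, and selects
    the option with the highest blended value.
    """

    def __init__(self, options, decay=0.5, noise=0.25, default_utility=1.5):
        self.options = options
        self.decay = decay                      # memory decay parameter d
        self.noise = noise                      # activation noise sigma
        self.default_utility = default_utility  # optimistic value for unexplored options
        self.t = 0                              # trial counter
        self.memory = {o: [] for o in options}  # option -> [(outcome, timestamp)]

    def _activation(self, timestamps):
        # Base-level activation: ln(sum over occurrences of (t - t_j)^-d) + noise
        base = math.log(sum((self.t - tj) ** -self.decay for tj in timestamps))
        return base + random.gauss(0.0, self.noise)

    def _blended_value(self, option):
        instances = self.memory[option]
        if not instances:
            return self.default_utility
        # Group occurrence times by observed outcome, one activation per outcome
        by_outcome = {}
        for outcome, tj in instances:
            by_outcome.setdefault(outcome, []).append(tj)
        acts = {o: self._activation(ts) for o, ts in by_outcome.items()}
        # Boltzmann retrieval probabilities over activations
        tau = self.noise * math.sqrt(2)
        m = max(acts.values())
        exps = {o: math.exp((a - m) / tau) for o, a in acts.items()}
        z = sum(exps.values())
        # Blended value: outcomes weighted by retrieval probability
        return sum(o * e / z for o, e in exps.items())

    def choose(self):
        self.t += 1
        return max(self.options, key=self._blended_value)

    def observe(self, option, outcome):
        self.memory[option].append((outcome, self.t))
```

A toy usage: with deterministic payoffs (option "A" yields 1.0, "B" yields 0.0) and an optimistic default utility driving initial exploration, the agent quickly settles on "A". The MAIBL variants in the paper extend this kind of mechanism to multiple independent learners facing stochastic rewards.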

