Improved Cooperation by Exploiting a Common Signal

by Panayiotis Danassis, et al.

Can artificial agents benefit from human conventions? Human societies manage to self-organize and resolve the tragedy of the commons in common-pool resources, in spite of the bleak predictions of non-cooperative game theory. On top of that, real-world problems are inherently large-scale and of low observability. One key concept that facilitates human coordination in such settings is the use of conventions. Inspired by human behavior, we investigate the learning dynamics and emergence of temporal conventions, focusing on common-pool resources. Extra emphasis was placed on designing a realistic evaluation setting: (a) environment dynamics are modeled on real-world fisheries, (b) we assume decentralized learning, where agents can observe only their own history, and (c) we run large-scale simulations (up to 64 agents). Uncoupled policies and low observability make cooperation hard to achieve; as the number of agents grows, the probability of taking a correct gradient direction decreases exponentially. By introducing an arbitrary common signal (e.g., date, time, or any periodic set of numbers) as a means to couple the learning process, we show that temporal conventions can emerge and agents reach sustainable harvesting strategies. The introduction of the signal consistently improves the social welfare (by 258% on average) and the range of environmental parameters where sustainability can be achieved (by 46% on average, up to 300%).
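The core idea above — conditioning each agent's otherwise independent learning on a shared periodic signal so that temporal turn-taking conventions can emerge — can be illustrated with a minimal sketch. This is not the authors' implementation; it is a simplified per-phase bandit model under assumed parameters (8 agents, a period-4 signal, a resource that sustainably supports 2 harvesters per step), where every function name is hypothetical:

```python
import random

def common_signal(t, period):
    # Arbitrary shared periodic signal: e.g., a timestamp or date.
    # All agents observe the same phase without any communication.
    return t % period

def train(n_agents=8, period=4, capacity=2, steps=5000,
          eps=0.1, lr=0.1, seed=0):
    """Decentralized per-phase bandit learning: each agent keeps an
    independent value estimate for {abstain, harvest} in every phase
    of the common signal, observing only its own reward."""
    rng = random.Random(seed)
    # Q[agent][phase][action]; action 0 = abstain, 1 = harvest.
    Q = [[[0.0, 0.0] for _ in range(period)] for _ in range(n_agents)]
    for t in range(steps):
        ph = common_signal(t, period)
        acts = []
        for a in range(n_agents):
            if rng.random() < eps:  # epsilon-greedy exploration
                acts.append(rng.randrange(2))
            else:
                acts.append(0 if Q[a][ph][0] >= Q[a][ph][1] else 1)
        harvesters = sum(acts)
        for a in range(n_agents):
            if acts[a] == 1:
                # Harvesting pays off only while total demand stays
                # within the resource's sustainable capacity.
                r = 1.0 if harvesters <= capacity else -1.0
            else:
                r = 0.0
            Q[a][ph][acts[a]] += lr * (r - Q[a][ph][acts[a]])
    return Q

def greedy_harvesters_per_phase(Q, period):
    # How many agents would harvest in each phase under their
    # learned greedy policies.
    return [sum(1 for q in Q if q[ph][1] > q[ph][0])
            for ph in range(period)]
```

Because each agent's estimates are indexed by the signal's phase, agents can specialize to different time slots instead of all competing at once; setting `period=1` removes the signal and recovers the uncoupled case in which every step looks identical to every agent.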


Signal Instructed Coordination in Cooperative Multi-agent Reinforcement Learning

In many real-world problems, a team of agents need to collaborate to max...

A multi-agent reinforcement learning model of common-pool resource appropriation

Humanity faces numerous problems of common-pool resource appropriation. ...

Stubborn: An Environment for Evaluating Stubbornness between Agents with Aligned Incentives

Recent research in multi-agent reinforcement learning (MARL) has shown s...

Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents

The emergence of complex life on Earth is often attributed to the arms r...

Cooperator driven oscillation in a time-delayed feedback-evolving game

Considering feedback of collective actions of cooperation on common reso...

Similarity-based Cooperation

As machine learning agents act more autonomously in the world, they will...

Cooperative control of environmental extremes by artificial intelligent agents

Humans have been able to tackle biosphere complexities by acting as ecos...
