Fast Teammate Adaptation in the Presence of Sudden Policy Change

by Ziqian Zhang et al.

In cooperative multi-agent reinforcement learning (MARL), where an agent coordinates with teammate(s) toward a shared goal, it may suffer from non-stationarity caused by changes in the teammates' policies. Prior work mainly concentrates on policy changes during the training phase or on teammates changing across episodes, ignoring the fact that a teammate's policy may change suddenly within an episode, which can lead to miscoordination and poor performance. We formulate the problem as an open Dec-POMDP, in which we control some agents to coordinate with uncontrolled teammates whose policies may change within a single episode. We then develop a new framework, fast teammate adaptation (Fastap), to address the problem. Concretely, we first train versatile teammate policies and assign them to different clusters via the Chinese Restaurant Process (CRP). Next, we train the controlled agent(s) to coordinate with sampled uncontrolled teammates by capturing their identities as context for fast adaptation. Finally, each agent uses its local information to infer the teammates' context and act accordingly. This process proceeds alternately, yielding a robust policy that can adapt to any teammates during the decentralized execution phase. We show on multiple multi-agent benchmarks that Fastap outperforms multiple baselines in both stationary and non-stationary scenarios.
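The Chinese Restaurant Process mentioned above is a standard nonparametric clustering prior: each new item joins an existing cluster with probability proportional to that cluster's current size, or opens a new cluster with probability proportional to a concentration parameter alpha. As a minimal illustrative sketch (not the paper's implementation; the function name and parameters are our own), CRP-style assignment of teammate policies to clusters could look like:

```python
import random

def crp_assign(num_items, alpha=1.0, seed=0):
    """Sample cluster assignments via the Chinese Restaurant Process.

    Item n joins existing cluster k with probability counts[k] / (n + alpha),
    or opens a new cluster with probability alpha / (n + alpha).
    """
    rng = random.Random(seed)
    counts = []       # counts[k] = number of items currently in cluster k
    assignments = []  # assignments[n] = cluster index of item n
    for n in range(num_items):
        # Unnormalized weights: one per existing cluster, plus a new cluster.
        weights = counts + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(1)  # open a new cluster ("table")
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

# Example: cluster 20 hypothetical teammate policies.
assignments, counts = crp_assign(20, alpha=1.0)
```

A larger alpha produces more, smaller clusters; in the framework described above, each cluster would group teammate policies that behave similarly, so the controlled agents only need to infer which cluster the current teammates belong to.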




Related papers:

- Agent Probing Interaction Policies
- Non-Stationary Policy Learning for Multi-Timescale Multi-Agent Reinforcement Learning
- Interaction-Aware Multi-Agent Reinforcement Learning for Mobile Agents with Individual Goals
- Multi-agent Deep Reinforcement Learning with Extremely Noisy Observations
- Dealing with Non-Stationarity in Multi-Agent Reinforcement Learning via Trust Region Decomposition
- Human Machine Co-adaption Interface via Cooperation Markov Decision Process System
