Reinforcement Learning with Attention that Works: A Self-Supervised Approach

04/06/2019
by Anthony Manchin, et al.

Attention models have had a significant positive impact on deep learning across a range of tasks. However, previous attempts at integrating attention with reinforcement learning have failed to produce significant improvements. We propose the first combination of self-attention and reinforcement learning that is capable of producing significant improvements, including new state-of-the-art results in the Arcade Learning Environment. Unlike the selective-attention models used in previous attempts, which constrain the attention via preconceived notions of importance, our implementation utilises the Markovian properties inherent in the state input. Our method produces a faithful visualisation of the policy, focusing on the behaviour of the agent. Our experiments demonstrate that the trained policies use multiple simultaneous foci of attention, and are able to modulate attention over time to deal with situations of partial observability.
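The abstract does not give implementation details, but a minimal sketch of the general idea, a self-attention block placed between the convolutional layers of an actor-critic network that consumes Atari-style frame stacks, might look as follows. The layer sizes, module names, and the non-local attention formulation below are illustrative assumptions, not the authors' released architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Non-local self-attention over the spatial positions of a feature map."""
    def __init__(self, channels, key_channels=None):
        super().__init__()
        key_channels = key_channels or channels // 8
        self.query = nn.Conv2d(channels, key_channels, 1)
        self.key = nn.Conv2d(channels, key_channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual gate, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(x).flatten(2)                      # (B, C', HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)                 # (B, HW, HW) attention weights
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x, attn               # attn maps can be rendered for visualisation

class AttentionPolicy(nn.Module):
    """Actor-critic head on a small conv trunk with one self-attention block (illustrative)."""
    def __init__(self, n_actions, in_channels=4):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 32, 8, stride=4)
        self.conv2 = nn.Conv2d(32, 64, 4, stride=2)
        self.attn = SelfAttention2d(64)
        self.conv3 = nn.Conv2d(64, 64, 3, stride=1)
        self.fc = nn.Linear(64 * 7 * 7, 512)
        self.policy = nn.Linear(512, n_actions)
        self.value = nn.Linear(512, 1)

    def forward(self, obs):                             # obs: (B, 4, 84, 84) stacked frames
        x = F.relu(self.conv1(obs))
        x = F.relu(self.conv2(x))
        x, attn_maps = self.attn(x)                     # attend over spatial positions of the state
        x = F.relu(self.conv3(x))
        x = F.relu(self.fc(x.flatten(1)))
        return self.policy(x), self.value(x), attn_maps

In a sketch like this, the returned attention maps are what would be visualised to show where the policy is attending, and the zero-initialised gamma lets the agent fall back to the plain convolutional features early in training and learn to rely on attention gradually.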


Related research:

11/30/2018 · Deep Multi-Agent Reinforcement Learning with Relevance Graphs
Over recent years, deep reinforcement learning has shown strong successe...

04/23/2022 · Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps
Transformer-based language models significantly advanced the state-of-th...

07/09/2021 · Attend2Pack: Bin Packing through Deep Reinforcement Learning with Attention
This paper seeks to tackle the bin packing problem (BPP) through a learn...

07/08/2020 · Self-Supervised Policy Adaptation during Deployment
In most real world scenarios, a policy trained by reinforcement learning...

05/27/2022 · FedFormer: Contextual Federation with Attention in Reinforcement Learning
A core issue in federated reinforcement learning is defining how to aggr...

04/28/2017 · Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to act...

03/18/2020 · Neuroevolution of Self-Interpretable Agents
Inattentional blindness is the psychological phenomenon that causes one ...
