Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers

by Amir Ardalan Kalantari, et al.
McGill University

Much of the recent success of Deep Reinforcement Learning is owed to a neural architecture's capacity to learn and exploit effective internal representations of the world. While many current algorithms rely on a simulator to train with large amounts of data, in realistic settings, including games played against people, collecting experience can be quite costly. In this paper, we introduce a deep reinforcement learning architecture designed to improve sample efficiency without sacrificing performance. We design this architecture by incorporating recent advances from Natural Language Processing and Computer Vision. Specifically, we propose a visually attentive model that uses transformers to learn a self-attention mechanism over the feature maps of the state representation, while simultaneously optimizing return. We demonstrate empirically that this architecture reduces sample complexity in several Atari environments, while also achieving better performance in some of the games.
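The core idea of the abstract, applying self-attention over the spatial positions of a convolutional feature map before producing action values, can be illustrated with a minimal sketch. This is not the authors' implementation; the weights, dimensions, and function names below are hypothetical, and the sketch omits the convolutional encoder and training loop, showing only single-head attention over feature-map tokens feeding a Q-value head:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_q_values(feature_map, params):
    """Single-head self-attention over feature-map positions, then a Q-value head.

    feature_map: (C, H, W) array, e.g. the output of a convolutional encoder
    applied to the game frame (encoder omitted here for brevity).
    """
    C, H, W = feature_map.shape
    tokens = feature_map.reshape(C, H * W).T       # (H*W, C): one token per spatial position
    Q = tokens @ params["Wq"]                      # queries
    K = tokens @ params["Wk"]                      # keys
    V = tokens @ params["Wv"]                      # values
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))           # (H*W, H*W) attention weights
    context = attn @ V                             # attended spatial features
    pooled = context.mean(axis=0)                  # pool over positions
    return pooled @ params["Wout"]                 # (n_actions,) Q-values

# Hypothetical sizes: 8 channels, 4x4 feature map, 16-dim attention, 6 actions.
C, H, W, d, n_actions = 8, 4, 4, 16, 6
params = {
    "Wq": rng.normal(size=(C, d)),
    "Wk": rng.normal(size=(C, d)),
    "Wv": rng.normal(size=(C, d)),
    "Wout": rng.normal(size=(d, n_actions)),
}
q_values = self_attention_q_values(rng.normal(size=(C, H, W)), params)
```

In a value-based agent, `q_values` would drive action selection (e.g. greedily via `np.argmax(q_values)`) and be trained with a temporal-difference loss on the return, which is the joint optimization the abstract describes.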




