Attention that does not Explain Away

09/29/2020
by Nan Ding et al.

Models based on the Transformer architecture have achieved better accuracy than those based on competing architectures across a large set of tasks. A unique feature of the Transformer is its universal application of a self-attention mechanism, which allows for free information flow at arbitrary distances. Following a probabilistic view of the attention via the Gaussian mixture model, we find empirical evidence that the Transformer attention tends to "explain away" certain input neurons. To compensate for this, we propose a doubly-normalized attention scheme that is simple to implement and provides a theoretical guarantee against the "explaining away" effect without introducing significant computational or memory cost. Empirically, we show that the new attention scheme results in improved performance on several well-known benchmarks.
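To make the contrast between the two normalizations concrete, the sketch below compares standard (singly-normalized) attention with one common reading of a doubly-normalized scheme: the exponentiated scores are first normalized over the query axis, then renormalized over the key axis. This is a minimal NumPy illustration, not the paper's reference implementation; the function names, the 1/sqrt(d) scaling, and the eps term are assumptions made for the example.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # Standard Transformer attention: scores are normalized over the
    # key axis only, so a key may receive near-zero total attention
    # from every query ("explained away").
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)           # (n_q, n_k)
    weights = softmax(scores, axis=-1)      # each row (query) sums to 1
    return weights @ V

def doubly_normalized_attention(Q, K, V, eps=1e-9):
    # Doubly-normalized attention (one possible reading of the scheme):
    # first normalize the exponentiated scores over the query axis, so
    # every key distributes one unit of attention among the queries,
    # then renormalize each query's weights over the key axis.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)           # (n_q, n_k)
    per_key = softmax(scores, axis=0)       # each column (key) sums to 1
    weights = per_key / (per_key.sum(axis=-1, keepdims=True) + eps)
    return weights @ V

# Toy usage: 4 queries, 6 keys, model dimension 8 (hypothetical shapes).
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(standard_attention(Q, K, V).shape)            # (4, 8)
print(doubly_normalized_attention(Q, K, V).shape)   # (4, 8)
```

Under this reading, the query-axis normalization forces each key's attention mass, summed over all queries, to equal one, so no input neuron can be driven to a vanishing total contribution; the subsequent key-axis renormalization restores a proper mixture over keys for each query.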


