Scaling Local Self-Attention For Parameter Efficient Visual Backbones

03/23/2021
by Ashish Vaswani, et al.

Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to the parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to offer encouraging improvements in accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50. In this work, we aim to develop self-attention models that can outperform not just the canonical baseline models, but even the high-performing convolutional models. We propose two extensions to self-attention that, in conjunction with a more efficient implementation of self-attention, improve the speed, memory usage, and accuracy of these models. We leverage these improvements to develop a new self-attention model family, HaloNets, which reach state-of-the-art accuracies in the parameter-limited setting of the ImageNet classification benchmark. In preliminary transfer learning experiments, we find that HaloNet models outperform much larger models and have better inference performance. On harder tasks such as object detection and instance segmentation, our simple local self-attention and convolutional hybrids show improvements over very strong baselines. These results mark another step in demonstrating the efficacy of self-attention models in settings traditionally dominated by convolutional models.
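
The abstract's key idea, blocked local self-attention with halos, is easy to sketch. Below is a minimal single-head NumPy illustration (not the paper's implementation): the feature map is split into non-overlapping query blocks, and each block attends to keys and values from its own block grown by a halo of pixels on every side, with zero-padding at the borders assumed here. The function name `halo_attention` and the `block`/`halo` parameters are illustrative, and learned query/key/value projections and multi-head structure are omitted for brevity.

```python
import numpy as np

def halo_attention(x, block=4, halo=1):
    """Sketch of blocked local self-attention with halos (single head).

    x: (H, W, d) feature map, with H and W divisible by `block`.
    Simplification: queries/keys/values are the raw features; a real
    layer would apply learned projections and use multiple heads.
    """
    H, W, d = x.shape
    b, h = block, halo
    # Queries: non-overlapping b x b blocks -> (H//b, W//b, b*b, d).
    q = x.reshape(H // b, b, W // b, b, d).transpose(0, 2, 1, 3, 4)
    q = q.reshape(H // b, W // b, b * b, d)
    # Keys/values: each block grown by `halo` pixels (zero-padded borders).
    xp = np.pad(x, ((h, h), (h, h), (0, 0)))
    k = np.empty((H // b, W // b, (b + 2 * h) ** 2, d))
    for i in range(H // b):
        for j in range(W // b):
            win = xp[i * b : i * b + b + 2 * h, j * b : j * b + b + 2 * h]
            k[i, j] = win.reshape(-1, d)
    # Scaled dot-product attention of each block over its haloed window.
    logits = q @ k.transpose(0, 1, 3, 2) / np.sqrt(d)
    attn = np.exp(logits - logits.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)
    out = attn @ k  # values share the haloed window with the keys here
    # Scatter the attended blocks back to the (H, W, d) layout.
    out = out.reshape(H // b, W // b, b, b, d).transpose(0, 2, 1, 3, 4)
    return out.reshape(H, W, d)

# Example: 8x8 map, 16 channels, 4x4 blocks, halo of 1 -> 6x6 key windows.
y = halo_attention(np.random.randn(8, 8, 16), block=4, halo=1)
print(y.shape)  # (8, 8, 16)
```

Because every query block shares one key/value window, memory and compute grow with the block and halo sizes rather than with the full image, which is what makes this form of local attention cheaper than global self-attention.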


Related research

07/01/2022 · Rethinking Query-Key Pairwise Interactions in Vision Transformers
  Vision Transformers have achieved state-of-the-art performance in many v...

06/13/2019 · Stand-Alone Self-Attention in Vision Models
  Convolutions are a fundamental building block of modern computer vision ...

02/17/2021 · LambdaNetworks: Modeling Long-Range Interactions Without Attention
  We present lambda layers – an alternative framework to self-attention – ...

07/12/2021 · Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms
  Self-Attention has become prevalent in computer vision models. Inspired ...

01/07/2021 · Self-Attention Based Context-Aware 3D Object Detection
  Most existing point-cloud based 3D object detectors use convolution-like...

08/30/2022 · MRL: Learning to Mix with Attention and Convolutions
  In this paper, we present a new neural architectural block for the visio...

05/31/2021 · Choose a Transformer: Fourier or Galerkin
  In this paper, we apply the self-attention from the state-of-the-art Tra...
