XCiT: Cross-Covariance Image Transformers

06/17/2021
by   Alaaeldin El-Nouby, et al.

Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens, i.e., words or image patches, and enables flexible modelling of image data beyond the local interactions of convolutions. This flexibility, however, comes with a quadratic complexity in time and memory, hindering application to long sequences and high-resolution images. We propose a "transposed" version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens, and allows efficient processing of high-resolution images. Our cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. We validate the effectiveness and generality of XCiT by reporting excellent results on multiple vision benchmarks, including image classification and self-supervised feature learning on ImageNet-1k, object detection and instance segmentation on COCO, and semantic segmentation on ADE20k.
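To make the "transposed" attention concrete, below is a minimal single-head sketch of cross-covariance attention in PyTorch, based only on the description in the abstract: queries and keys are L2-normalized along the token dimension, their d-by-d cross-covariance matrix is softmax-normalized, and the values are mixed channel-wise. The function name, the fixed temperature `tau`, and the single-head layout are illustrative assumptions; the paper's full block (per-head projections, learnable temperature, local patch interaction) is omitted.

```python
import torch
import torch.nn.functional as F

def cross_covariance_attention(q, k, v, tau=1.0):
    """Sketch of cross-covariance attention (XCA) for a single head.

    q, k, v: tensors of shape (num_tokens, num_channels).
    The attention map has shape (num_channels, num_channels), so the cost
    grows linearly with the number of tokens instead of quadratically.
    Note: tau is a learnable temperature in the paper; fixed here for brevity.
    """
    # L2-normalize queries and keys along the token dimension
    q_hat = F.normalize(q, dim=0)
    k_hat = F.normalize(k, dim=0)
    # Cross-covariance between keys and queries: (d x d) channel-attention map
    attn = torch.softmax((k_hat.transpose(0, 1) @ q_hat) / tau, dim=-1)
    # Re-weight the channels of the values with the channel-attention map
    return v @ attn

# Example: 14x14 = 196 patch tokens with 64 channels per head (hypothetical sizes)
tokens = torch.randn(196, 64)
out = cross_covariance_attention(tokens, tokens, tokens)  # shape (196, 64)
```

Because the softmax is taken over a channel-by-channel matrix, doubling the image resolution (and hence the number of tokens) only grows the cost of the matrix products linearly, which is what allows XCiT to handle high-resolution inputs.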


Related research

07/01/2021  Focal Self-attention for Local-Global Interactions in Vision Transformers
12/15/2022  Full Contextual Attention for Multi-resolution Transformers in Semantic Segmentation
05/11/2023  Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers
06/07/2022  Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse
03/22/2023  Multiscale Attention via Wavelet Neural Operators for Vision Transformers
06/01/2022  Transformer with Fourier Integral Attentions
09/15/2022  Hydra Attention: Efficient Attention with Many Heads
