Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators?

07/26/2023
by Tokio Kajitsuka, et al.

Existing analyses of the expressive capacity of Transformer models have required excessively deep layers for data memorization, leading to a discrepancy with the Transformers actually used in practice. This is primarily due to the interpretation of the softmax function as an approximation of the hardmax function. By clarifying the connection between the softmax function and the Boltzmann operator, we prove that a single layer of self-attention with low-rank weight matrices can perfectly capture the context of an entire input sequence. As a consequence, we show that a single-layer Transformer has a memorization capacity for finite samples, and that Transformers consisting of one self-attention layer with two feed-forward neural networks are universal approximators for continuous functions on a compact domain.
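
The softmax/Boltzmann connection mentioned in the abstract can be sketched with a standard identity. For a score vector z in R^n and inverse temperature beta > 0, a common definition of the Boltzmann operator is the softmax-weighted average of the entries of z; this is only a sketch of the kind of connection the abstract refers to, and the exact formulation in the paper may differ:

```latex
\operatorname{boltz}_{\beta}(z)
  = \frac{\sum_{i=1}^{n} z_i \, e^{\beta z_i}}{\sum_{j=1}^{n} e^{\beta z_j}}
  = z^{\top} \operatorname{softmax}(\beta z),
\qquad
\lim_{\beta \to \infty} \operatorname{boltz}_{\beta}(z) = \max_{i} z_i .
```

Unlike the hardmax view, where softmax is read as an approximately one-hot selection of a single position, the Boltzmann view keeps the full weighted average over all positions, which is what allows a single attention layer to retain information about the whole input sequence. For concreteness, below is a minimal NumPy sketch of one softmax self-attention head whose combined query-key matrix has rank 1; the function name, shapes, and random data are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def low_rank_self_attention(X, u, v, W_V):
    """One softmax self-attention head whose combined query-key matrix
    W_Q W_K^T = u v^T has rank 1 (names and shapes are illustrative)."""
    # Attention scores: score[i, j] = x_i^T (u v^T) x_j = (x_i . u)(x_j . v)
    scores = (X @ u)[:, None] * (X @ v)[None, :]
    # Row-wise softmax over key positions (numerically stabilized)
    probs = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Each output token is a softmax-weighted average of all value vectors,
    # so even a single layer mixes information from the entire sequence.
    return probs @ (X @ W_V)

# Minimal usage on random data: 5 tokens, 4 features
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
out = low_rank_self_attention(X, rng.normal(size=4), rng.normal(size=4),
                              rng.normal(size=(4, 4)))
print(out.shape)  # (5, 4)
```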


Related research

Are Transformers universal approximators of sequence-to-sequence functions? (12/20/2019)
Despite the widespread adoption of Transformer models for NLP tasks, the...

Optimal inference of a generalised Potts model by single-layer transformers with factored attention (04/14/2023)
Transformers are the type of neural networks that has revolutionised nat...

Transformers from an Optimization Perspective (05/27/2022)
Deep learning models such as the Transformer are often constructed by he...

Greedy Ordering of Layer Weight Matrices in Transformers Improves Translation (02/04/2023)
Prior work has attempted to understand the internal structures and funct...

Universality and Limitations of Prompt Tuning (05/30/2023)
Despite the demonstrated empirical efficacy of prompt tuning to adapt a ...

Scratching Visual Transformer's Back with Uniform Attention (10/16/2022)
The favorable performance of Vision Transformers (ViTs) is often attribu...

BiViT: Extremely Compressed Binary Vision Transformer (11/14/2022)
Model binarization can significantly compress model size, reduce energy ...
