Linear Self-Attention Approximation via Trainable Feedforward Kernel

11/08/2022
by Uladzislau Yorsh, et al.

In pursuit of faster computation, Efficient Transformers demonstrate an impressive variety of approaches: models attaining sub-quadratic attention complexity can exploit a notion of sparsity or a low-rank approximation of the inputs to reduce the number of attended keys; other ways to reduce complexity include locality-sensitive hashing, key pooling, additional memory to store information in a compacted form, or hybridization with other architectures such as CNNs. Often built on a strong mathematical foundation, kernelized approaches allow attention to be approximated with linear complexity while retaining high accuracy. In the present paper, we therefore aim to extend the idea of trainable kernel methods to approximate the self-attention mechanism of the Transformer architecture.
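The kernelized view replaces the softmax attention matrix with a product of feature maps, so the key-value summary can be computed once and reused for every query, bringing the cost down from quadratic to linear in sequence length. The snippet below is only a minimal sketch of this idea with an assumed small trainable feedforward feature map; the module name, dimensions, and the Softplus nonlinearity are illustrative assumptions, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class TrainableKernelLinearAttention(nn.Module):
    """Illustrative kernelized linear attention where the feature map phi
    is a small trainable feedforward network (hypothetical, not the
    authors' exact design). Cost is O(n * d * r) rather than O(n^2 * d)."""

    def __init__(self, dim: int, feature_dim: int = 64):
        super().__init__()
        self.to_qkv = nn.Linear(dim, 3 * dim, bias=False)
        # Trainable feedforward kernel: projects queries/keys into a
        # feature space kept non-negative by an assumed Softplus.
        self.phi = nn.Sequential(
            nn.Linear(dim, feature_dim),
            nn.Softplus(),
        )
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k = self.phi(q), self.phi(k)              # (b, n, r)
        # Aggregate keys and values once, independently of the queries.
        kv = torch.einsum('bnr,bnd->brd', k, v)      # (b, r, d)
        # Normalizer playing the role of the softmax denominator.
        z = 1.0 / (torch.einsum('bnr,br->bn', q, k.sum(dim=1)) + 1e-6)
        out = torch.einsum('bnr,brd,bn->bnd', q, kv, z)
        return self.out(out)
```

Because the key-value summary `kv` does not depend on the query index, each query only multiplies a fixed (r, d) matrix, which is what makes the approximation linear in sequence length.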
