Shortformer: Better Language Modeling using Shorter Inputs

12/31/2020
by Ofir Press, et al.

We explore the benefits of decreasing the input length of transformers. First, we show that initially training the model on short subsequences, before moving on to longer ones, both reduces overall training time and, surprisingly, gives a large improvement in perplexity. We then show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens when generating sequences that are longer than the maximal length the transformer can handle at once. Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results. By combining these techniques, we increase training speed by 65%, make generation nine times faster, and substantially improve perplexity on WikiText-103, without adding any parameters.
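The key architectural idea in the abstract, adding absolute position embeddings to the attention queries and keys rather than to the word embeddings, can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' released code; the class and attribute names (PositionInfusedAttention, pos_emb, and the projection layers) are made up for the example. The point it shows is that positions are injected only where attention scores are computed, while the values and the token representations themselves stay position-free.

```python
# Minimal sketch (assumed names, not the paper's implementation) of adding
# absolute position embeddings to queries and keys instead of to word embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionInfusedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, max_len: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Absolute position embeddings, used only inside the attention layer.
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) word embeddings with NO positions added.
        batch, seq_len, d_model = x.shape
        pos = self.pos_emb(torch.arange(seq_len, device=x.device))  # (seq_len, d_model)

        q = self.q_proj(x + pos)   # positions added to queries...
        k = self.k_proj(x + pos)   # ...and to keys
        v = self.v_proj(x)         # values remain position-free

        def split(t):
            return t.view(batch, seq_len, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5

        # Causal mask for language modeling.
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1
        )
        scores = scores.masked_fill(mask, float("-inf"))
        attn = F.softmax(scores, dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(batch, seq_len, d_model)
        return self.out_proj(out)


# Example usage with arbitrary sizes:
# layer = PositionInfusedAttention(d_model=512, n_heads=8, max_len=1024)
# y = layer(torch.randn(2, 128, 512))
```

Because the token representations carry no positional information, cached states from a previously processed segment can be reused when the model conditions on them at new positions, which is what makes the cheaper alternative to relative position embeddings attractive for recurrence.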

Related research:

12/20/2022
A Length-Extrapolatable Transformer
Position modeling plays a critical role in Transformers. In this paper, ...

08/27/2021
Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Since the introduction of the transformer model by Vaswani et al. (2017)...

05/31/2023
The Impact of Positional Encoding on Length Generalization in Transformers
Length generalization, the ability to generalize from small training con...

06/06/2021
CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings
Without positional information, attention-based transformer neural netwo...

01/09/2019
Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Transformer networks have a potential of learning longer-term dependency...

09/28/2020
Improve Transformer Models with Better Relative Position Embeddings
Transformer architectures rely on explicit position encodings in order t...

09/13/2021
SHAPE: Shifted Absolute Position Embedding for Transformers
Position representation is crucial for building position-aware represent...
