Shortformer: Better Language Modeling using Shorter Inputs

12/31/2020
by Ofir Press, et al.

We explore the benefits of decreasing the input length of transformers. First, we show that initially training the model on short subsequences, before moving on to longer ones, both reduces overall training time and, surprisingly, gives a large improvement in perplexity. We then show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens (when generating sequences that are larger than the maximal length that the transformer can handle at once). Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results. By combining these techniques, we increase training speed by 65%, make generation nine times faster, and substantially improve perplexity on WikiText-103, without adding any parameters.
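To make the position-embedding change concrete, here is a minimal single-head sketch of an attention sublayer that adds absolute position embeddings to the queries and keys rather than to the word embeddings. The class name, shapes, and hyperparameters are illustrative assumptions, not the paper's implementation, and the interaction with cached states from previous subsequences is omitted.

```python
import math
import torch
import torch.nn as nn

class PositionInfusedAttention(nn.Module):
    """Illustrative sketch: absolute position embeddings are added to the
    queries and keys inside the attention sublayer, not to the values and
    not to the word embeddings at the model input."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, d_model)  # absolute positions
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.scale = math.sqrt(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token representations that carry
        # no position information from the embedding layer.
        positions = torch.arange(x.size(1), device=x.device)
        p = self.pos_emb(positions)                # (seq_len, d_model)

        q = self.q_proj(x + p)                     # positions go into queries...
        k = self.k_proj(x + p)                     # ...and keys,
        v = self.v_proj(x)                         # ...but not into values.

        scores = q @ k.transpose(-2, -1) / self.scale
        causal = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(causal, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        return attn @ v                            # (batch, seq_len, d_model)
```

Because the values (and hence the token representations passed between layers) never have positions baked into them, previously computed states can be reused when conditioning on earlier tokens without carrying stale position information, which is what removes the need for relative position embeddings in the recurrence setting.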
