Staged Training for Transformer Language Models

03/11/2022
by Sheng Shen, et al.

The current standard approach to scaling transformer language models trains each model size from a different random initialization. As an alternative, we consider a staged training setup that begins with a small model and incrementally increases the amount of compute used for training by applying a "growth operator" to increase the model depth and width. By initializing each stage with the output of the previous one, the training process effectively re-uses the compute from prior stages and becomes more efficient. Our growth operators each take as input the entire training state (including model parameters, optimizer state, learning rate schedule, etc.) and output a new training state from which training continues. We identify two important properties of these growth operators, namely that they preserve both the loss and the "training dynamics" after applying the operator. While the loss-preserving property has been discussed previously, to the best of our knowledge this work is the first to identify the importance of preserving the training dynamics (the rate of decrease of the loss during training). To determine the optimal schedule of stages, we use the scaling laws of Kaplan et al. (2020) to derive a precise schedule that maximizes compute savings by starting a new stage when training efficiency begins to decrease. We empirically validate our growth operators and staged training for autoregressive language models, showing up to 22% compute savings compared to a strong baseline trained from scratch. Our code is available at https://github.com/allenai/staged-training.
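
As a concrete illustration of the loss-preserving property, the sketch below shows one way a depth-growth operator can double the number of transformer blocks without changing the model's output: each new block is initialized so that its residual branches output zeros, making it an identity map on the residual stream. This is a minimal sketch under assumed names, not the authors' implementation (which is in the linked repository): the attributes `model.blocks`, `attn.out_proj`, and `mlp.fc2` are hypothetical and refer to a generic pre-LayerNorm GPT-style model.

```python
# Minimal, illustrative depth-growth sketch for a pre-LayerNorm transformer.
# Hypothetical module/attribute names; not the paper's actual operators.
import copy

import torch.nn as nn


def grow_depth(model: nn.Module) -> nn.Module:
    """Double the number of transformer blocks by interleaving new blocks
    whose residual branches are zeroed out, so each new block acts as the
    identity and the training loss is unchanged at the moment of growth."""
    old_blocks = list(model.blocks)          # assumes `model.blocks` is an nn.ModuleList
    new_blocks = []
    for block in old_blocks:
        new_blocks.append(block)             # keep the trained block as-is
        fresh = copy.deepcopy(block)         # new block with identical shapes
        # Zero the output projections of the attention and MLP sub-layers so
        # that each residual branch contributes nothing: x + 0 + 0 = x.
        nn.init.zeros_(fresh.attn.out_proj.weight)
        if fresh.attn.out_proj.bias is not None:
            nn.init.zeros_(fresh.attn.out_proj.bias)
        nn.init.zeros_(fresh.mlp.fc2.weight)
        if fresh.mlp.fc2.bias is not None:
            nn.init.zeros_(fresh.mlp.fc2.bias)
        new_blocks.append(fresh)
    model.blocks = nn.ModuleList(new_blocks)
    return model
```

Note that this sketch only transforms the parameters; as the abstract describes, the paper's growth operators act on the entire training state, also mapping the optimizer state and learning-rate schedule so that the training dynamics (the rate of loss decrease) are preserved after growth.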


