Effective Theory of Transformers at Initialization

04/04/2023
by Emily Dinan et al.

We perform an effective-theory analysis of forward-backward signal propagation in wide and deep Transformers, i.e., residual neural networks with multi-head self-attention blocks and multilayer perceptron blocks. This analysis suggests particular width scalings of initialization and training hyperparameters for these models. We then take up such suggestions, training Vision and Language Transformers in practical setups.
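The "width scalings" mentioned in the abstract concern how initialization variances and training hyperparameters should change as the model width grows. As a rough illustration only (not the paper's exact prescription), the Python sketch below initializes one attention-plus-MLP residual block with weight variance 1/fan_in, so that pre-activation magnitudes stay O(1) as width increases, and scales a base learning rate with width. The function names, the 4x MLP expansion, and the 1/width learning-rate form are assumptions made for this example.

```python
import numpy as np

# A minimal sketch of width-aware initialization for one residual block
# (multi-head self-attention + MLP). Each weight matrix is drawn with
# variance 1/fan_in so the output scale is independent of the width.
# The Gaussian init and the 1/width learning-rate scaling below are
# illustrative assumptions, not values taken from the paper.

def init_block(width: int, mlp_mult: int = 4, seed: int = 0):
    rng = np.random.default_rng(seed)

    def w(fan_in, fan_out):
        # Std 1/sqrt(fan_in) keeps pre-activations O(1) as width grows.
        return rng.normal(0.0, fan_in ** -0.5, size=(fan_in, fan_out))

    return {
        # Multi-head self-attention projections (query, key, value, output).
        "W_q": w(width, width),
        "W_k": w(width, width),
        "W_v": w(width, width),
        "W_o": w(width, width),
        # Two-layer MLP block with the usual 4x hidden expansion.
        "W_1": w(width, mlp_mult * width),
        "W_2": w(mlp_mult * width, width),
    }

def scaled_lr(base_lr: float, width: int, base_width: int = 256) -> float:
    # Illustrative width scaling of a training hyperparameter: shrink the
    # learning rate in proportion to 1/width (an assumed functional form).
    return base_lr * base_width / width

params = init_block(width=1024)
print({k: v.shape for k, v in params.items()})
print(scaled_lr(1e-3, width=1024))
```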


Related research

Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse (06/07/2022)
Transformers have achieved remarkable success in several domains, rangin...

Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation (02/20/2023)
Skip connections and normalisation layers form two standard architectura...

Neural Nets via Forward State Transformation and Backward Loss Transformation (03/25/2018)
This article studies (multilayer perceptron) neural networks with an emp...

Transformers learn through gradual rank increase (06/12/2023)
We identify incremental learning dynamics in transformers, where the dif...

Scaling Laws for the Principled Design, Initialization and Preconditioning of ReLU Networks (06/10/2019)
In this work, we describe a set of rules for the design and initializati...

A Probabilistic Interpretation of Transformers (04/28/2022)
We propose a probabilistic interpretation of exponential dot product att...

Robustness Verification for Transformers (02/16/2020)
Robustness verification that aims to formally certify the prediction beh...
