Effective Theory of Transformers at Initialization

04/04/2023
by Emily Dinan, et al.

We perform an effective-theory analysis of forward-backward signal propagation in wide and deep Transformers, i.e., residual neural networks with multi-head self-attention blocks and multilayer perceptron blocks. This analysis suggests particular width scalings of initialization and training hyperparameters for these models. We then take up such suggestions, training Vision and Language Transformers in practical setups.
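The abstract does not spell out the suggested scalings, but the general idea behind width-aware initialization can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the paper's exact prescription: it initializes the weights of a Transformer-style MLP block with variance 1/fan_in, so that forward activation statistics stay order-one as the width grows. The function name `init_width_scaled_`, the `sigma` parameter, and the choice of block are all hypothetical.

```python
# A minimal sketch of width-aware initialization (assumed 1/fan_in
# variance scaling; NOT necessarily the scaling derived in the paper).
import math
import torch
import torch.nn as nn

def init_width_scaled_(linear: nn.Linear, sigma: float = 1.0) -> None:
    """Initialize weights with std = sigma / sqrt(fan_in); zero the biases."""
    fan_in = linear.in_features
    nn.init.normal_(linear.weight, mean=0.0, std=sigma / math.sqrt(fan_in))
    if linear.bias is not None:
        nn.init.zeros_(linear.bias)

width = 1024  # model (embedding) dimension
mlp = nn.Sequential(
    nn.Linear(width, 4 * width),  # standard 4x MLP expansion
    nn.GELU(),
    nn.Linear(4 * width, width),
)
for layer in mlp:
    if isinstance(layer, nn.Linear):
        init_width_scaled_(layer)

# With 1/fan_in variance, the output scale is roughly width-independent:
x = torch.randn(8, width)
print(mlp(x).std())  # stays O(1) if you vary `width`
```

The design point this illustrates is that when per-weight variance is held fixed instead of scaled with width, activations and gradients blow up or vanish as the model gets wider; scaling the variance (and, analogously, training hyperparameters such as the learning rate) with width is what keeps signal propagation stable in the wide-network limit the paper analyzes.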
