Transformer Language Models without Positional Encodings Still Learn Positional Information

03/30/2022
by Adi Haviv, et al.

Transformers typically require some form of positional encoding, such as positional embeddings, to process natural language sequences. Surprisingly, we find that transformer language models without any explicit positional encoding are still competitive with standard models, and that this phenomenon is robust across different datasets, model sizes, and sequence lengths. Probing experiments reveal that such models acquire an implicit notion of absolute positions throughout the network, effectively compensating for the missing information. We conjecture that causal attention enables the model to infer the number of predecessors that each token can attend to, thereby approximating its absolute position.
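As a rough illustration of the conjecture (this is a sketch, not code or analysis from the paper), the NumPy snippet below assumes a causal self-attention layer whose scores are uniform over the visible prefix, so position i simply averages the value vectors of tokens 0..i. Because that average is taken over i+1 vectors, its scale shrinks with position, so the representation at each position carries a signal about how many predecessors it attended to, i.e. its absolute position.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 16, 64

# Token embeddings with no positional encoding: i.i.d. random vectors.
x = rng.standard_normal((seq_len, d_model))

# Hypothetical "uniform" causal attention: all unmasked scores are equal,
# so after the softmax, position i averages the values of tokens 0..i.
scores = np.zeros((seq_len, seq_len))
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[mask] = -np.inf
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ x  # out[i] is the mean of x[0..i]

# The norm of a mean of i+1 i.i.d. vectors shrinks roughly like 1/sqrt(i+1),
# so each position's output implicitly encodes its absolute position.
for i in range(seq_len):
    print(f"position {i:2d}: ||out|| = {np.linalg.norm(out[i]):.3f}")
```

Running the loop shows the output norms decaying with position, which is one concrete way the causal mask alone could leak absolute-position information into the representations.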
