Analyzing Transformers in Embedding Space

09/06/2022
by Guy Dar, et al.

Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, in which parameters are interpreted directly without a forward or backward pass, is feasible for some Transformer parameters and for two-layer attention networks. In this work, we present a theoretical analysis in which all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, we present an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by "translating" the parameters of a fine-tuned classifier to parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only.
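To make the core idea concrete, the sketch below (not the authors' code) illustrates what "projecting a parameter into embedding space" can look like: a parameter vector living in the model's hidden space is multiplied by the embedding matrix, and the highest-scoring vocabulary items are read off as its interpretation. All names (E, w, vocab) and the toy dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a vocabulary and an embedding matrix E with one row per token.
d_model, vocab_size = 16, 100
E = rng.normal(size=(vocab_size, d_model))
vocab = [f"token_{i}" for i in range(vocab_size)]

# A parameter vector in model (hidden) space, e.g. a single column of a
# feed-forward weight matrix or of an attention projection.
w = rng.normal(size=(d_model,))

# Project into embedding space: one score per vocabulary item.
scores = E @ w  # shape: (vocab_size,)

# Interpret the parameter by its top-k vocabulary items.
k = 5
for idx in np.argsort(-scores)[:k]:
    print(f"{vocab[idx]}: {scores[idx]:.3f}")
```

Because the interpretation lives entirely in vocabulary space, the same reading can in principle be compared across models that share an embedding vocabulary, which is what the alignment and classifier-translation applications rely on.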
