Revealing the Dark Secrets of BERT

08/21/2019
by   Olga Kovaleva, et al.

BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success. In this work, we focus on the interpretation of self-attention, one of the fundamental components of BERT. Using a subset of GLUE tasks and a set of handcrafted features of interest, we propose a methodology and carry out a qualitative and quantitative analysis of the information encoded by BERT's individual heads. Our findings suggest that a limited set of attention patterns is repeated across different heads, indicating that the overall model is overparametrized. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
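The head-disabling experiment described above can be reproduced in spirit with the Hugging Face `transformers` library, which exposes a `head_mask` argument for zeroing out individual attention heads at inference time. The sketch below is illustrative only and is not the authors' code: the checkpoint name and the specific heads disabled are assumptions chosen for demonstration.

```python
# Minimal sketch: disabling selected BERT attention heads via `head_mask`.
# Assumptions: the "bert-base-uncased" checkpoint and the particular
# (layer, head) pairs below are hypothetical, not taken from the paper.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)
model.eval()

num_layers = model.config.num_hidden_layers   # 12 for BERT-base
num_heads = model.config.num_attention_heads  # 12 for BERT-base

# head_mask has shape (num_layers, num_heads); 1.0 keeps a head, 0.0 disables it.
head_mask = torch.ones(num_layers, num_heads)
head_mask[0, 3] = 0.0   # example: disable head 3 in layer 0
head_mask[5, 7] = 0.0   # example: disable head 7 in layer 5

inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, head_mask=head_mask)

# Compare these logits against a run with the all-ones mask to measure
# the effect of the disabled heads on the task prediction.
print(outputs.logits)
```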

Related research

05/01/2020 · When BERT Plays the Lottery, All Tickets Are Winning
Much of the recent success in NLP is due to the large Transformer-based ...

04/08/2020 · Improving BERT with Self-Supervised Attention
One of the most popular paradigms of applying large, pre-trained NLP mod...

01/28/2020 · PEL-BERT: A Joint Model for Protocol Entity Linking
Pre-trained models such as BERT are widely used in NLP tasks and are fin...

05/18/2021 · Effective Attention Sheds Light On Interpretability
An attention matrix of a transformer self-attention sublayer can provabl...

02/19/2023 · Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT
Recently, ChatGPT has attracted great attention, as it can generate flue...

11/02/2020 · How Far Does BERT Look At: Distance-based Clustering and Analysis of BERT's Attention
Recent research on the multi-head attention mechanism, especially that i...

05/13/2022 · A Study of the Attention Abnormality in Trojaned BERTs
Trojan attacks raise serious security concerns. In this paper, we invest...
