Betti numbers of attention graphs is all you really need

07/05/2022
by Laida Kushnareva et al.

We apply methods of topological analysis to attention graphs computed from the attention heads of the BERT model (arXiv:1810.04805v2). Our research shows that a classifier built upon basic persistent topological features (namely, Betti numbers) of the trained neural network can achieve classification results on par with the conventional classification method. We demonstrate the relevance of such topological text representations on three text classification benchmarks. To the best of our knowledge, this is the first attempt to analyze the topology of an attention-based neural network widely used for Natural Language Processing.
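As a rough illustration of the idea (a sketch, not the authors' exact pipeline): each attention matrix can be thresholded at several levels to obtain a family of undirected graphs, the first two Betti numbers of each graph computed, and the resulting vectors fed to a standard classifier. For a graph, Betti-0 is the number of connected components and Betti-1 is the cycle rank E - V + C. The threshold grid, the symmetrization step, and the use of networkx with scikit-learn's LogisticRegression below are illustrative assumptions.

    import numpy as np
    import networkx as nx
    from sklearn.linear_model import LogisticRegression

    def betti_features(attention, thresholds=(0.01, 0.05, 0.1, 0.25, 0.5)):
        """Betti-0 and Betti-1 of thresholded attention graphs.

        attention: (seq_len, seq_len) attention matrix from a single head.
        Returns a feature vector of length 2 * len(thresholds).
        """
        sym = np.maximum(attention, attention.T)  # symmetrize -> undirected graph
        np.fill_diagonal(sym, 0.0)                # drop self-loops
        feats = []
        for t in thresholds:
            g = nx.from_numpy_array((sym >= t).astype(int))  # edge iff weight >= t
            v, e = g.number_of_nodes(), g.number_of_edges()
            c = nx.number_connected_components(g)
            feats.append(c)          # Betti-0: number of connected components
            feats.append(e - v + c)  # Betti-1: independent cycles (cycle rank)
        return np.array(feats, dtype=float)

    # Toy usage: random row-stochastic matrices stand in for real BERT attention
    # heads; in practice the features from all layers and heads are concatenated.
    rng = np.random.default_rng(0)
    X = np.stack([betti_features(rng.dirichlet(np.ones(16), size=16))
                  for _ in range(32)])
    y = rng.integers(0, 2, size=32)
    clf = LogisticRegression(max_iter=1000).fit(X, y)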
