Long-Range Transformer Architectures for Document Understanding

09/11/2023
by Thibault Douzon, et al.

Since their release, Transformers have revolutionized many fields, from Natural Language Understanding to Computer Vision. Document Understanding (DU) was not left behind: the first Transformer-based models for DU date from late 2019. However, the computational complexity of the self-attention operation limits their capabilities to small sequences. In this paper we explore multiple strategies for applying Transformer-based models to long multi-page documents. We introduce two new multi-modal (text + layout) long-range models for DU, based on efficient implementations of Transformers for long sequences. Long-range models can effectively process whole documents at once and are less impaired by a document's length. We compare them to LayoutLM, a classical Transformer adapted for DU and pre-trained on millions of documents. We further propose a 2D relative attention bias that guides self-attention towards relevant tokens without harming model efficiency. We observe improvements on Information Retrieval for multi-page business documents, at a small performance cost on smaller sequences. Relative 2D attention proved effective on dense text for both normal and long-range models.
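To make the 2D relative attention bias concrete, here is a minimal NumPy sketch of one way such a bias could be implemented: relative x and y offsets between token positions are mapped to a small number of buckets, and a learned scalar bias per bucket is added to the attention logits before the softmax. The bucketing scheme (T5-style sign split with log-scale far bins) and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def relative_bucket(delta, num_buckets=32, max_distance=128):
    # Map signed relative distances to bucket indices in [0, 2*num_buckets):
    # positive offsets use the upper half, small distances get exact buckets,
    # larger distances share log-spaced buckets (an illustrative scheme).
    sign = (delta > 0).astype(np.int64) * num_buckets
    d = np.abs(delta)
    half = num_buckets // 2
    is_small = d < half
    large = half + (
        np.log(np.maximum(d, 1) / half)
        / np.log(max_distance / half)
        * (num_buckets - 1 - half)
    ).astype(np.int64)
    large = np.minimum(large, num_buckets - 1)
    return sign + np.where(is_small, d, large)

def attention_with_2d_bias(q, k, v, x_pos, y_pos, bias_x, bias_y):
    """Scaled dot-product attention plus a learned 2D relative bias.

    q, k, v: (seq, dim) arrays; x_pos, y_pos: (seq,) token coordinates
    on the page; bias_x, bias_y: (2 * num_buckets,) learned per-bucket
    scalar biases (hypothetical parameter layout).
    """
    seq, dim = q.shape
    logits = q @ k.T / np.sqrt(dim)           # (seq, seq) attention logits
    dx = x_pos[None, :] - x_pos[:, None]      # pairwise horizontal offsets
    dy = y_pos[None, :] - y_pos[:, None]      # pairwise vertical offsets
    logits = logits + bias_x[relative_bucket(dx)] + bias_y[relative_bucket(dy)]
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v
```

Because the bias depends only on bucketed relative positions, the lookup table stays small and the extra cost per attention score is a constant-time table read, which is why such a bias can be added without harming model efficiency.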


Related research

04/14/2022 · Revisiting Transformer-based Models for Long Document Classification
The recent literature in text classification is biased towards short tex...

03/15/2022 · Long Document Summarization with Top-down and Bottom-up Inference
Text summarization aims to condense long documents and retain key inform...

05/11/2020 · Local Self-Attention over Long Text for Efficient Document Retrieval
Neural networks, particularly Transformer-based architectures, have achi...

03/17/2023 · CoLT5: Faster Long-Range Transformers with Conditional Computation
Many natural language processing tasks benefit from long inputs, but pro...

09/10/2021 · Query-driven Segment Selection for Ranking Long Documents
Transformer-based rankers have shown state-of-the-art performance. Howev...

09/02/2022 · Extend and Explain: Interpreting Very Long Language Models
While Transformer language models (LMs) are state-of-the-art for informa...

03/24/2023 · MSdocTr-Lite: A Lite Transformer for Full Page Multi-script Handwriting Recognition
The Transformer has quickly become the dominant architecture for various...
