Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer

11/02/2022
by Dimitris Mamakas, et al.

Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length and require far fewer resources to train and deploy, but they are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform a linear SVM with TF-IDF features overall in long legal document classification.
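To make the first direction concrete, below is a minimal sketch, in Python with HuggingFace transformers, of the length-extension step: tiling a Longformer's learned position embeddings so the model accepts inputs of up to 8,192 sub-words. The checkpoint name and label count are illustrative placeholders, and the paper additionally warm-starts the Longformer from LegalBERT, which this sketch does not reproduce.

```python
import torch
from transformers import LongformerForSequenceClassification

MAX_LEN = 8192  # target maximum input length in sub-words

model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096",  # illustrative checkpoint
    num_labels=10,                   # placeholder label count
)

emb = model.longformer.embeddings
old = emb.position_embeddings.weight.data  # (4098, hidden): 4096 + 2 reserved slots
pad = 2                                    # RoBERTa-style reserved position slots
new = old.new_empty(MAX_LEN + pad, old.size(1))
new[:pad] = old[:pad]
pos = pad
while pos < MAX_LEN + pad:                 # tile the learned embeddings
    n = min(old.size(0) - pad, MAX_LEN + pad - pos)
    new[pos:pos + n] = old[pad:pad + n]
    pos += n

emb.position_embeddings = torch.nn.Embedding.from_pretrained(
    new, freeze=False, padding_idx=emb.position_embeddings.padding_idx)
model.config.max_position_embeddings = MAX_LEN + pad
# Depending on the transformers version, a cached position_ids buffer
# and tokenizer.model_max_length may also need to be updated.
```

The baseline the abstract measures against is a plain linear SVM over TF-IDF features, which never truncates its input. A minimal scikit-learn sketch of that baseline, with toy data and untuned, illustrative hyperparameters:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for a long-document corpus; TF-IDF handles
# texts of any length, so real inputs are never truncated.
train_texts = ["the court dismissed the appeal ...",
               "the contract was held void ..."]
train_labels = [0, 1]

clf = make_pipeline(
    TfidfVectorizer(sublinear_tf=True, ngram_range=(1, 2), min_df=1),
    LinearSVC(C=1.0),  # illustrative hyperparameters
)
clf.fit(train_texts, train_labels)
print(clf.predict(["the appeal was dismissed ..."]))
```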


Related research

09/13/2022 · Pre-training Transformers on Indian Legal Text
Natural Language Processing in the legal domain has benefited hugely by...

05/09/2021 · Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents
Legal artificial intelligence (LegalAI) aims to benefit legal systems wi...

09/14/2021 · Legal Transformer Models May Not Always Help
Deep learning-based Natural Language Processing methods, especially tran...

12/14/2019 · Long-length Legal Document Classification
One of the principal tasks of machine learning with major applications i...

10/11/2022 · An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification
Non-hierarchical sparse attention Transformer-based models, such as Long...

05/06/2023 · Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks
A legal document is usually long and dense, requiring human effort to par...

07/05/2021 · Experiments with adversarial attacks on text genres
Neural models based on pre-trained transformers, such as BERT or XLM-RoB...
