Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks

by Lena Schmidt et al.

This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show a capacity for transfer learning and improved performance on downstream natural language processing tasks such as universal reading comprehension, enabled by this architecture's use of contextualized word embeddings and self-attention mechanisms. This paper contributes to resolving ambiguity in PICO sentence prediction tasks, and it shows how annotations originally created for training named entity recognition systems can be reused to train a high-performing yet flexible architecture for question answering in systematic review automation. It also demonstrates how the shortage of training annotations for PICO entity extraction can be addressed through data augmentation. All models in this paper were created to support systematic review (semi)automation. They achieve high F1 scores and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature.
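The abstract describes reusing span-level named entity annotations to train a question-answering model. A minimal sketch of that data conversion, in the spirit of SQuAD-style QA datasets, might look as follows. The function name, the question templates, and the example annotation offsets are illustrative assumptions, not the authors' actual implementation:

```python
def ner_to_qa(text, entities, questions=None):
    """Turn PICO entity annotations into (question, context, answer) triples.

    entities: list of dicts like {"label": "Population", "start": 0, "end": 31},
    with character offsets into `text`.
    """
    if questions is None:
        # Hypothetical question templates, one per PICO field.
        questions = {
            "Population": "Who was enrolled in the study?",
            "Intervention": "What intervention was studied?",
            "Comparator": "What was the intervention compared against?",
            "Outcome": "What outcomes were measured?",
        }
    examples = []
    for ent in entities:
        examples.append({
            "question": questions[ent["label"]],
            "context": text,
            "answer_text": text[ent["start"]:ent["end"]],
            "answer_start": ent["start"],  # offset kept for extractive QA training
        })
    return examples


# Toy clinical-trial sentence with two hand-made annotations (illustrative only).
abstract = "120 adults with type 2 diabetes received metformin or placebo."
annotations = [
    {"label": "Population", "start": 0, "end": 31},
    {"label": "Intervention", "start": 41, "end": 50},
]
for ex in ner_to_qa(abstract, annotations):
    print(ex["question"], "->", ex["answer_text"])
```

Framing extraction as QA in this way is what keeps the architecture flexible: new entity types only require a new question template, not a new label set.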


Related articles:

- A Comparative Study of Pretrained Language Models for Long Clinical Text
- Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences
- UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition
- Neural Skill Transfer from Supervised Language Tasks to Reading Comprehension
- Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023?
- Calibrating Structured Output Predictors for Natural Language Processing
