ALE: A Simulation-Based Active Learning Evaluation Framework for the Parameter-Driven Comparison of Query Strategies for NLP

08/01/2023
by Philipp Kohl, et al.

Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain through a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, rather than a sequential or random sample. This approach is meant to save annotation effort while maintaining model performance. However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and papers introducing novel strategies compare their performance only against a small subset of existing ones. Our contribution addresses this empirical gap by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP. The framework enables the implementation of AL strategies with low effort and a fair, data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With such best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
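To make the tracked experiment parameters concrete, the following is a minimal, hypothetical sketch of how such a configuration and a simulated query loop might look. The names (ExperimentConfig, random_strategy, simulate) and the random-sampling baseline are illustrative assumptions, not the framework's actual API.

```python
# Hypothetical sketch of the experiment parameters the abstract mentions
# (initial dataset size, data points per query step, budget) driving a
# simulated AL loop with a random-sampling baseline strategy.
import random
from dataclasses import dataclass


@dataclass
class ExperimentConfig:
    initial_dataset_size: int = 100   # seed set labeled before the first query step
    query_step_size: int = 50         # data points proposed per AL iteration
    budget: int = 500                 # total number of annotations allowed
    seed: int = 42                    # fixed seed for reproducible comparisons


def random_strategy(unlabeled: list[int], k: int) -> list[int]:
    """Baseline query strategy: sample k unlabeled points uniformly at random."""
    return random.sample(unlabeled, min(k, len(unlabeled)))


def simulate(config: ExperimentConfig, pool_size: int = 2000) -> None:
    random.seed(config.seed)
    unlabeled = list(range(pool_size))
    labeled: list[int] = []

    # Label the initial seed set before any strategy-driven querying.
    seed_batch = random_strategy(unlabeled, config.initial_dataset_size)
    labeled.extend(seed_batch)
    unlabeled = [i for i in unlabeled if i not in set(seed_batch)]

    # Query steps until the annotation budget is exhausted.
    while len(labeled) < config.budget and unlabeled:
        batch = random_strategy(unlabeled, config.query_step_size)
        labeled.extend(batch)
        unlabeled = [i for i in unlabeled if i not in set(batch)]
        # In a full evaluation, a model would be retrained and scored here
        # to track performance as a function of annotation count.
        print(f"labeled={len(labeled)} / budget={config.budget}")


if __name__ == "__main__":
    simulate(ExperimentConfig())
```

Swapping in a different query strategy in place of the random baseline, while keeping the same configuration and seed, is what allows a fair, parameter-driven comparison of strategies.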

