Active Learning with Weak Labels for Gaussian Processes

by Amanda Olmin, et al.

Annotating data for supervised learning can be costly. When the annotation budget is limited, active learning can be used to select and annotate those observations that are likely to give the most gain in model performance. We propose an active learning algorithm that, in addition to selecting which observation to annotate, selects the precision of the annotation that is acquired. Assuming that annotations with low precision are cheaper to obtain, this allows the model to explore a larger part of the input space at the same annotation cost. We build our acquisition function on the previously proposed BALD objective for Gaussian processes, and empirically demonstrate the gains of being able to adjust the annotation precision in the active learning loop.
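To make the idea concrete, here is a minimal sketch of a BALD-style acquisition step for GP regression, extended with a choice of annotation precision. For a GP with latent predictive variance var_f and Gaussian annotation noise of variance noise_var, BALD reduces to 0.5 * log(1 + var_f / noise_var); the sketch trades this off against a per-precision cost. The cost model and the specific precision options are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy 1-D labelled set and a pool of unlabelled candidates.
X_train = rng.uniform(-3, 3, size=(10, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(10)
X_pool = np.linspace(-5, 5, 50).reshape(-1, 1)

# alpha is the assumed noise variance of the existing labels.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(X_train, y_train)

# Hypothetical annotation options: label noise variance -> acquisition cost.
# (Assumed for illustration; cheaper annotations are noisier.)
precisions = {0.01: 1.0, 0.1: 0.4, 1.0: 0.1}

# Latent predictive variance at each candidate input.
_, std_f = gp.predict(X_pool, return_std=True)
var_f = std_f ** 2

best_score, best_idx, best_noise = -np.inf, None, None
for noise_var, cost in precisions.items():
    # BALD for GP regression: I(y; f | x) = 0.5 * log(1 + var_f / noise_var),
    # here normalised by the cost of acquiring a label at this precision.
    scores = 0.5 * np.log1p(var_f / noise_var) / cost
    j = int(np.argmax(scores))
    if scores[j] > best_score:
        best_score, best_idx, best_noise = scores[j], j, noise_var

print(best_idx, best_noise)  # chosen input index and annotation noise level
```

In a full active learning loop, the selected point would be annotated at the chosen precision, the GP refit with a correspondingly inflated noise term for that label, and the selection repeated until the budget is spent.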




Bayesian active learning for choice models with deep Gaussian processes

In this paper, we propose an active learning algorithm and models which ...

Active Learning for Coreference Resolution using Discrete Annotation

We improve upon pairwise annotation for active learning in coreference r...

Privacy-preserving Active Learning on Sensitive Data for User Intent Classification

Active learning holds promise of significantly reducing data annotation ...

Active Testing: Sample-Efficient Model Evaluation

We introduce active testing: a new framework for sample-efficient model ...

Active Learning with Gaussian Processes for High Throughput Phenotyping

A looming question that must be solved before robotic plant phenotyping ...

Optimizing Active Learning for Low Annotation Budgets

When we cannot assume a large amount of annotated data, active learnin...

OPAD: An Optimized Policy-based Active Learning Framework for Document Content Analysis

Documents are central to many business systems, and include forms, repor...
