
Toward Optimal Probabilistic Active Learning Using a Bayesian Approach

by Daniel Kottke et al.

Gathering labeled data to train well-performing machine learning models is one of the critical challenges in many applications. Active learning aims to reduce labeling costs through an efficient and effective allocation of costly labeling resources. In this article, we propose a decision-theoretic selection strategy that (1) directly optimizes the gain in misclassification error, and (2) uses a Bayesian approach, introducing a conjugate prior distribution over the class posterior to deal with uncertainties. By reformulating existing selection strategies within our proposed model, we can explain which aspects are not covered by the current state of the art and why this leads to the superior performance of our approach. Extensive experiments on a large variety of datasets and different kernels validate our claims.
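The core idea can be illustrated with a minimal binary-class sketch (this is an illustrative simplification, not the paper's exact estimator: the per-candidate label counts, the uniform Beta(1, 1) conjugate prior, and the one-step lookahead are assumptions made for brevity):

```python
def expected_error(n_pos, n_neg):
    """Bayes-expected misclassification error in a candidate's neighborhood,
    given observed label counts, under a uniform Beta(1, 1) conjugate prior.
    The classifier predicts the majority class of the Beta posterior."""
    a, b = n_pos + 1, n_neg + 1      # conjugate Beta posterior parameters
    return min(a, b) / (a + b)       # error of predicting the more likely class

def error_gain(n_pos, n_neg):
    """Expected reduction in misclassification error from acquiring one
    more label (one-step lookahead over the Beta predictive distribution)."""
    a, b = n_pos + 1, n_neg + 1
    p_pos = a / (a + b)              # predictive probability of a positive label
    err_after = (p_pos * expected_error(n_pos + 1, n_neg)
                 + (1 - p_pos) * expected_error(n_pos, n_neg + 1))
    return expected_error(n_pos, n_neg) - err_after

# Hypothetical candidates with (positive, negative) label counts nearby:
candidates = {"x1": (0, 0), "x2": (1, 1), "x3": (10, 0)}
best = max(candidates, key=lambda x: error_gain(*candidates[x]))
```

Under these assumptions the strategy prefers unexplored or ambiguous regions: a candidate with no nearby labels (0, 0) yields a positive expected gain, while a clearly decided region such as (10, 0) yields a gain of zero, so it is never queried ahead of uncertain ones.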



