BABA: Beta Approximation for Bayesian Active Learning

by Jae Oh Woo, et al.

This paper introduces a new acquisition function for the Bayesian active learning framework, namely BABA. It is motivated by the well-established BALD and BatchBALD acquisition functions, which capture the mutual information between the model parameters and the predictive outputs on the data. Our proposed measure, BABA, quantifies a normalized mutual information by approximating the stochasticity of the predictive probabilities with Beta distributions. BABA outperforms the well-known family of acquisition functions that includes BALD and BatchBALD, which we demonstrate through extensive experiments on the MNIST and EMNIST datasets.
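To make the ideas in the abstract concrete, the sketch below shows the two ingredients it mentions: a BALD-style mutual-information score computed from Monte Carlo dropout samples, and a moment-matched Beta fit to the per-class predictive probabilities. This is a minimal illustration under stated assumptions, not the paper's exact BABA formula; the helper names (`fit_beta_moments`, `bald_score`) are hypothetical.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a categorical distribution (in nats)."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def fit_beta_moments(samples):
    """Moment-match a Beta(alpha, beta) to samples in (0, 1), per class.

    Hypothetical helper: the paper approximates the stochasticity of
    predictive probabilities with Beta distributions; moment matching
    is one standard way to obtain such a fit, not necessarily theirs.
    """
    m = samples.mean(axis=0)
    v = samples.var(axis=0) + 1e-12
    common = m * (1.0 - m) / v - 1.0
    common = np.maximum(common, 1e-6)  # guard against degenerate fits
    return m * common, (1.0 - m) * common

def bald_score(probs):
    """BALD mutual information estimated from stochastic forward passes.

    probs: array of shape (K, C) -- K MC-dropout samples over C classes.
    I(y; w) = H(E[p]) - E[H(p)].
    """
    return entropy(probs.mean(axis=0)) - entropy(probs).mean()

# Toy example: 8 MC-dropout samples of a 3-class predictive distribution.
rng = np.random.default_rng(0)
probs = rng.dirichlet([2.0, 1.0, 1.0], size=8)

alpha, beta = fit_beta_moments(probs)   # per-class Beta fits
score = bald_score(probs)               # BALD-style acquisition score
```

Because entropy is concave, `bald_score` is always non-negative; points with the largest scores (high model disagreement) would be acquired first.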




