
Prediction-Oriented Bayesian Active Learning

by Freddie Bickford Smith et al.

Information-theoretic approaches to active learning have traditionally focused on maximising the information gathered about the model parameters, most commonly by optimising the BALD score. We highlight that this can be suboptimal from the perspective of predictive performance. For example, BALD lacks a notion of an input distribution and so is prone to prioritising data of limited relevance. To address this, we propose the expected predictive information gain (EPIG), an acquisition function that measures information gain in the space of predictions rather than parameters. We find that using EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models, and thus provides an appealing drop-in replacement.
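To make the contrast concrete, the two acquisition functions can be sketched from Monte Carlo samples of the predictive distribution. Below is a minimal NumPy sketch (not the authors' implementation): BALD is the mutual information between a pool input's prediction and the model parameters, while EPIG averages, over inputs drawn from the target distribution, the mutual information between the pool prediction and the target prediction. Array names and shapes here are illustrative assumptions.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats, guarding against log(0)."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

def bald(pool_probs):
    """BALD score per pool input.

    pool_probs: [K, N, C] predictive probabilities from K posterior
    parameter samples, for N pool inputs and C classes.
    Returns [N]: entropy of the mean prediction minus mean entropy.
    """
    mean_p = pool_probs.mean(axis=0)                       # [N, C]
    return entropy(mean_p) - entropy(pool_probs).mean(axis=0)

def epig(pool_probs, targ_probs):
    """EPIG score: expected information gain in prediction space.

    pool_probs: [K, N, C] as above; targ_probs: [K, M, C] for M inputs
    sampled from the target input distribution. Returns [N].
    """
    K = pool_probs.shape[0]
    # Joint predictive p(y, y* | x, x*): average over parameter samples.
    joint = np.einsum('knc,kmd->nmcd', pool_probs, targ_probs) / K
    # Product of marginals p(y | x) p(y* | x*).
    marg = np.einsum('nc,md->nmcd', pool_probs.mean(0), targ_probs.mean(0))
    # Mutual information per (pool, target) pair, averaged over targets.
    mi = np.sum(joint * (np.log(np.clip(joint, 1e-12, None))
                         - np.log(np.clip(marg, 1e-12, None))), axis=(2, 3))
    return mi.mean(axis=1)
```

Both scores are non-negative; the key design difference is that EPIG's dependence on `targ_probs` injects the input distribution that BALD lacks, so inputs that are uncertain but uninformative about likely test predictions score low.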


Related Research

Unifying Approaches in Data Subset Selection via Fisher Information and Information-Theoretic Quantities
Characterizing the robustness of Bayesian adaptive experimental designs to active learning bias
Bayesian Active Learning for Classification and Preference Learning
BABA: Beta Approximation for Bayesian Active Learning
On Discarding, Caching, and Recalling Samples in Active Learning
Active Learning for Regression with Aggregated Outputs
Explaining Predictive Uncertainty with Information Theoretic Shapley Values

Code Repositories


Prediction-oriented Bayesian active learning (AISTATS 2023)
