
Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference

by Mohammad Hossein Shaker et al.
Universität München
Universität Paderborn

The idea of distinguishing and quantifying two important types of uncertainty, often referred to as aleatoric and epistemic, has received increasing attention in machine learning research in recent years. In this paper, we consider ensemble-based approaches to uncertainty quantification. Distinguishing between different types of uncertainty-aware learning algorithms, we specifically focus on Bayesian methods and approaches based on so-called credal sets, which naturally suggest themselves from an ensemble learning point of view. For both approaches, we address the question of how to quantify aleatoric and epistemic uncertainty. The effectiveness of the corresponding measures is evaluated and compared in an empirical study on classification with a reject option.
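For Bayesian ensembles, a common way to instantiate these notions (a minimal sketch of the standard entropy decomposition, not necessarily the exact measures evaluated in the paper) treats the ensemble members' predicted class distributions as samples from a posterior: total uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the average entropy of the individual predictions, and epistemic uncertainty is their difference (the mutual information between prediction and model):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; clip to avoid log(0)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log2(p), axis=-1)

def ensemble_uncertainties(probs):
    """Decompose ensemble uncertainty.

    probs: array of shape (n_members, n_classes), each row one
    ensemble member's predicted class distribution.
    Returns (total, aleatoric, epistemic) in bits.
    """
    probs = np.asarray(probs, dtype=float)
    mean_p = probs.mean(axis=0)
    total = entropy(mean_p)            # entropy of the mean prediction
    aleatoric = entropy(probs).mean()  # expected entropy of members
    epistemic = total - aleatoric      # mutual information
    return total, aleatoric, epistemic

# Two members that disagree maximally: aleatoric ~ 0, epistemic ~ 1 bit,
# i.e., the uncertainty stems from the model, not from the data.
total, alea, epi = ensemble_uncertainties([[1.0, 0.0], [0.0, 1.0]])
```

Such measures are typically used for the reject option mentioned above: predictions whose (total or epistemic) uncertainty exceeds a threshold are abstained on, and accuracy is evaluated on the retained instances.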



