DIBS: Diversity inducing Information Bottleneck in Model Ensembles

03/10/2020
by Samarth Sinha, et al.

Although deep learning models have achieved state-of-the-art performance on a number of vision tasks, generalization over high-dimensional multi-modal data and reliable predictive uncertainty estimation remain active areas of research. Bayesian approaches, including Bayesian Neural Nets (BNNs), do not scale well to modern computer vision tasks, as they are difficult to train and generalize poorly under dataset shift. This motivates the need for effective ensembles that generalize well and give reliable uncertainty estimates. In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in their predictions. We explicitly optimize a diversity-inducing adversarial loss for learning the stochastic latent variables, and thereby obtain the diversity in output predictions necessary for modeling multi-modal data. We evaluate our method on the benchmark datasets MNIST, CIFAR100, TinyImageNet and MIT Places 2, and show significant improvements over the most competitive baselines in classification accuracy, under a shift in the data distribution, and in out-of-distribution detection. Code will be released at https://github.com/rvl-lab-utoronto/dibs
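The abstract describes encouraging diversity in ensemble predictions via an explicit loss term. As a simplified, hypothetical illustration of that general idea (not the paper's actual adversarial objective, which operates on stochastic latent variables), the sketch below combines per-member cross-entropy with a pairwise symmetric-KL term that rewards disagreement between member predictive distributions; the function name, the weighting `lam`, and the choice of KL are all assumptions for illustration:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_diversity_loss(logits, labels, lam=0.1):
    """Toy diversity-regularized ensemble loss.

    logits: (M, N, C) raw outputs of M ensemble members on N examples
    labels: (N,) integer class labels
    Returns (total, ce, div) where total = ce - lam * div:
    ce is the mean cross-entropy over members and examples, and div
    is the mean pairwise symmetric KL between member predictive
    distributions (larger div = more diverse ensemble).
    """
    M, N, C = logits.shape
    probs = softmax(logits)  # (M, N, C)
    # mean cross-entropy of each member against the true labels
    ce = -np.log(probs[:, np.arange(N), labels] + 1e-12).mean()
    # mean pairwise symmetric KL between member distributions
    div, pairs = 0.0, 0
    for i in range(M):
        for j in range(i + 1, M):
            p, q = probs[i], probs[j]
            kl_pq = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1)
            kl_qp = (q * (np.log(q + 1e-12) - np.log(p + 1e-12))).sum(-1)
            div += 0.5 * (kl_pq + kl_qp).mean()
            pairs += 1
    div /= max(pairs, 1)
    return ce - lam * div, ce, div
```

Subtracting the diversity term means gradient descent trades a little per-member accuracy for disagreement between members, which is the intuition behind diversity-inducing ensemble objectives; the paper's actual formulation differs in where and how the pressure for diversity is applied.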


Related research

- 06/15/2020: Neural Ensemble Search for Performant and Calibrated Predictions. "Ensembles of neural networks achieve superior performance compared to st..."
- 06/18/2019: Maximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles. "The inaccuracy of neural network models on inputs that do not stem from ..."
- 02/22/2018: Diversity regularization in deep ensembles. "Calibrating the confidence of supervised learning models is important fo..."
- 04/21/2021: Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings. "Reliability of machine learning (ML) systems is crucial in safety-critic..."
- 12/23/2022: Benchmark for Uncertainty Robustness in Self-Supervised Learning. "Self-Supervised Learning (SSL) is crucial for real-world applications, e..."
- 03/14/2023: Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models. "Deep Ensembles are a simple, reliable, and effective method of improving..."
- 07/20/2022: Latent Discriminant deterministic Uncertainty. "Predictive uncertainty estimation is essential for deploying Deep Neural..."
