
β-Cores: Robust Large-Scale Bayesian Data Summarization in the Presence of Outliers

by Dionysis Manousakas, et al.
University of Cambridge

Modern machine learning applications must address the intrinsic challenges of inference on massive real-world datasets, including scalability and robustness to outliers. Despite the many benefits of Bayesian methods (such as uncertainty-aware predictions, incorporation of expert knowledge, and hierarchical modeling), the quality of classic Bayesian inference depends critically on whether observations conform to the assumed data-generating model, which is impossible to guarantee in practice. In this work, we propose a variational inference method that, in a principled way, can simultaneously scale to large datasets and robustify the inferred posterior against outliers in the observed data. Reformulating Bayes' theorem via the β-divergence, we posit a robustified pseudo-Bayesian posterior as the target of inference. Moreover, building on recent formulations of Riemannian coresets for scalable Bayesian inference, we propose a sparse variational approximation of the robustified posterior and an efficient stochastic black-box algorithm to construct it. Overall, our method allows releasing cleansed data summaries that can be applied broadly, including in scenarios with structured data corruption. We illustrate the applicability of our approach on diverse simulated and real datasets and various statistical models, including Gaussian mean inference, logistic regression, and neural linear regression, demonstrating its superiority over existing Bayesian summarization methods in the presence of outliers.
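To see why the β-divergence reformulation confers robustness, consider the standard example of inferring a Gaussian mean: the β-divergence replaces each observation's negative log-likelihood with a bounded per-point loss, so the gradient contribution of an observation is reweighted by its model density raised to the power β. The sketch below (a minimal illustration, not the paper's algorithm; function names and the choice β = 0.5 are ours) computes this reweighting for a Gaussian with known variance and shows that a gross outlier contributes essentially nothing to updates of the mean.

```python
import math

def beta_loss_gaussian(x, mu, sigma, beta):
    """Per-observation beta-divergence loss for N(mu, sigma^2).

    Replaces the usual -log p(x | mu): unlike the log-loss, it is
    bounded in x, so gross outliers cannot dominate the posterior.
    """
    norm = (2 * math.pi * sigma**2) ** (-beta / 2)
    dens_pow = norm * math.exp(-beta * (x - mu) ** 2 / (2 * sigma**2))
    # Closed-form integral of N(y | mu, sigma^2)^(beta+1) over y.
    integral = norm / math.sqrt(beta + 1)
    return -dens_pow / beta + integral / (beta + 1)

def influence_weight(x, mu, sigma, beta):
    """Weight multiplying the usual score (x - mu) / sigma^2 in the
    gradient of the beta-loss w.r.t. mu: w(x) = p(x | mu)^beta.

    For inliers w(x) is O(1); for outliers it decays like a Gaussian
    tail, effectively downweighting them out of the inference."""
    norm = (2 * math.pi * sigma**2) ** (-beta / 2)
    return norm * math.exp(-beta * (x - mu) ** 2 / (2 * sigma**2))

# An inlier near the mean vs. a gross outlier.
w_inlier = influence_weight(0.1, mu=0.0, sigma=1.0, beta=0.5)
w_outlier = influence_weight(25.0, mu=0.0, sigma=1.0, beta=0.5)
print(w_outlier / w_inlier)  # essentially zero: the outlier barely moves mu
```

As β → 0 the β-loss recovers the ordinary negative log-likelihood (and with it the sensitivity of standard Bayes), while larger β trades a little statistical efficiency for stronger downweighting of observations the model deems implausible.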





Code Repositories


Coresets for scalable robust pseudo-Bayesian inference
