Principles of Bayesian Inference using General Divergence Criteria

02/26/2018
by Jack Jewson, et al.

When it is acknowledged that all candidate parameterised statistical models are misspecified relative to the data generating process, the decision maker (DM) must currently concern themselves with inference for the parameter value minimising the KL-divergence between the model and the process (Walker, 2013). However, it has long been known that minimising the KL-divergence places a large weight on correctly capturing the tails of the sample distribution. As a result, the DM is required to worry about the robustness of their model to tail misspecifications if they want to conduct principled inference. In this paper we alleviate these concerns for the DM. We advance recent methodological developments in general Bayesian updating (Bissiri, Holmes and Walker, 2016) to propose a statistically well-principled Bayesian updating of beliefs targeting the minimisation of more general divergence criteria. We improve both the motivation and the statistical foundations of existing Bayesian minimum divergence estimation (Hooker and Vidyashankar, 2014; Ghosh and Basu, 2016), allowing the well-principled Bayesian to target predictions from the model that are close to the genuine model in terms of an alternative divergence measure to the KL-divergence. Our principled formulation allows us to consider a broader range of divergences than have previously been considered. In fact, we argue that defining the divergence measure forms an important, subjective part of any statistical analysis, and we aim to provide some decision-theoretic rationale for this selection. We illustrate how targeting alternative divergence measures can impact the conclusions of simple inference tasks, and then discuss how our methods might apply to more complicated, high-dimensional models.
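To make the construction concrete, the following is a minimal sketch (not the authors' code) of a general Bayesian update in the style of Bissiri, Holmes and Walker (2016), in which the usual log-score is replaced by the density power (beta-) divergence loss used in Ghosh and Basu (2016), for a Gaussian location model with known scale. The grid, prior, contamination level, loss scale and beta value are illustrative assumptions, not values from the paper.

```python
# Sketch: general Bayesian updating with a beta-divergence loss for a
# Gaussian location model with known scale. All numerical settings below
# are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def beta_divergence_loss(x, mu, sigma, beta):
    """Per-observation density power divergence loss,
    -(1/beta) f(x)^beta + (1/(1+beta)) * int f(z)^(1+beta) dz,
    where the integral has a closed form for the Gaussian model."""
    f_x = norm.pdf(x, mu, sigma)
    integral = (2.0 * np.pi * sigma**2) ** (-beta / 2.0) / np.sqrt(1.0 + beta)
    return -f_x**beta / beta + integral / (1.0 + beta)

def general_bayes_log_posterior(x, mu_grid, sigma, beta, prior_logpdf, w=1.0):
    """General Bayesian update: log pi(mu | x) is proportional to
    log prior(mu) - w * sum_i loss(x_i, mu); w is the loss scale."""
    losses = np.array([beta_divergence_loss(x, mu, sigma, beta).sum()
                       for mu in mu_grid])
    logpost = prior_logpdf(mu_grid) - w * losses
    return logpost - logpost.max()  # stabilise before exponentiating

# Illustrative data: a standard Gaussian sample contaminated in the tail.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])

mu_grid = np.linspace(-2.0, 4.0, 400)
prior_logpdf = lambda mu: norm.logpdf(mu, 0.0, 10.0)

logpost = general_bayes_log_posterior(x, mu_grid, sigma=1.0, beta=0.5,
                                      prior_logpdf=prior_logpdf)
post = np.exp(logpost)
post /= np.trapz(post, mu_grid)
print("posterior mean of mu:", np.trapz(mu_grid * post, mu_grid))
```

Because the beta-divergence loss downweights observations with low model density, the resulting posterior for mu is far less sensitive to the contaminated tail than the standard (KL-targeting) Bayesian posterior would be under the same prior.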
