Calibration procedures for approximate Bayesian credible sets

10/15/2018
by Jeong Eun Lee, et al.

We develop and apply two calibration procedures for checking the coverage of approximate Bayesian credible sets, including intervals estimated using Monte Carlo methods. The user has an ideal prior and likelihood, but generates a credible set for an approximate posterior which is not proportional to the product of the ideal likelihood and prior. We estimate the realised posterior coverage achieved by the approximate credible set. This is the coverage of the unknown "true" parameter if the data are a realisation of the user's ideal observation model conditioned on the parameter, and the parameter is a draw from the user's ideal prior. In one approach we estimate the posterior coverage at the data by fitting a semi-parametric logistic regression of binary coverage outcomes against summary statistics, both evaluated on simulated data. In another we use Importance Sampling (IS) from the approximate posterior, windowing simulated data to fall close to the observed data. We give a Bayes Factor measuring the evidence for the realised posterior coverage to be below a user-specified threshold. We illustrate our methods on four examples.
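The regression-based procedure can be sketched on a toy conjugate model. Everything below is an illustrative assumption, not the paper's setup: the ideal prior is theta ~ N(0, 1), the ideal likelihood is y | theta ~ N(theta, 1) (exact posterior N(y/2, 1/2)), and the "approximate" posterior deliberately understates the posterior variance as 1/4, so its nominal 95% interval under-covers. Binary coverage outcomes from simulated (theta, y) pairs are regressed on the summary statistic s(y) = y via a plain parametric logistic fit (a simple stand-in for the semi-parametric regression described in the abstract).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from the ideal joint: theta from the prior, y from the model.
N = 5000
theta = rng.normal(0.0, 1.0, N)     # draws from the ideal prior N(0, 1)
y = rng.normal(theta, 1.0)          # data from the ideal observation model

# Approximate posterior N(y/2, 1/4): nominal 95% interval y/2 +/- 1.96 * 0.5.
half_width = 1.96 * 0.5
covered = (np.abs(theta - y / 2.0) < half_width).astype(float)

# Logistic regression of coverage outcomes on s(y) = y, fitted by
# Newton-Raphson on the features [1, y].
X = np.column_stack([np.ones(N), y])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (covered - p)
    hess = X.T @ (X * (p * (1.0 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

def coverage_at(y_obs):
    """Estimated realised posterior coverage at the observed data y_obs."""
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * y_obs)))

print(round(coverage_at(0.0), 3))   # well below the nominal 0.95
```

In this toy setup the true realised coverage is constant in y (about 0.83, since theta - y/2 has standard deviation sqrt(1/2) under the joint), so the fitted curve should be nearly flat and sit clearly below the nominal 0.95, which is the kind of miscalibration the procedures in the paper are designed to detect.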
