Translating predictive distributions into informative priors

03/15/2023
by Andrew A. Manderson, et al.

When complex Bayesian models exhibit implausible behaviour, one solution is to assemble available information into an informative prior. Challenges arise because prior information is often available only for the observable quantity, or for some model-derived marginal quantity, rather than directly pertaining to the natural parameters of the model. We propose a method for translating available prior information, in the form of an elicited distribution for the observable or model-derived marginal quantity, into an informative joint prior. Our approach proceeds given a parametric class of prior distributions with as-yet undetermined hyperparameters, and minimises the difference between the supplied elicited distribution and the corresponding prior predictive distribution. We employ a global, multi-stage Bayesian optimisation procedure to locate optimal values for the hyperparameters. Three examples illustrate our approach: a nonlinear regression model; a setting in which prior information pertains to R^2, a model-derived quantity; and a cure-fraction survival model, where censoring implies that the observable quantity is a priori a mixed discrete/continuous quantity.
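To make the core idea concrete, here is a minimal sketch (not the authors' implementation) for a toy linear model: hyperparameters of the prior are chosen by minimising a 1-Wasserstein discrepancy between simulated prior predictive draws and samples from a hypothetical elicited distribution. The model, the elicited target, the bounds, and the use of SciPy's differential evolution as a stand-in for the paper's multi-stage Bayesian optimisation are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Hypothetical elicited distribution for the observable y: the expert's belief
# that y is roughly Normal(10, 2). In practice this would come from a formal
# elicitation exercise.
elicited = rng.normal(loc=10.0, scale=2.0, size=5_000)

# Toy observable model: y = beta * x + eps with a single covariate value,
# noise eps ~ Normal(0, 1), and prior beta ~ Normal(mu, sigma). The
# hyperparameters to be determined are (mu, log sigma).
x = 2.5
z_beta = rng.standard_normal(5_000)  # common random numbers for beta
z_eps = rng.standard_normal(5_000)   # common random numbers for the noise

def prior_predictive(hyper):
    """Draw from the prior predictive of y given hyperparameters (mu, log_sigma)."""
    mu, log_sigma = hyper
    beta = mu + np.exp(log_sigma) * z_beta  # beta ~ Normal(mu, sigma)
    return beta * x + z_eps                 # y | beta ~ Normal(beta * x, 1)

def discrepancy(hyper):
    """1-Wasserstein distance between prior predictive and elicited samples."""
    return wasserstein_distance(prior_predictive(hyper), elicited)

# Global search over the hyperparameters; differential evolution stands in here
# for the multi-stage Bayesian optimisation procedure described in the paper.
result = differential_evolution(discrepancy, bounds=[(-10, 10), (-3, 3)], seed=1)
mu_opt, sigma_opt = result.x[0], np.exp(result.x[1])
print(f"induced prior: beta ~ Normal({mu_opt:.2f}, {sigma_opt:.2f}^2)")
```

Because the same base random numbers are reused across objective evaluations, the discrepancy is a deterministic function of the hyperparameters, which keeps the global optimiser well behaved.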
