Bayesian selective inference: sampling models and non-informative priors

08/11/2020
by Daniel G. Rasines, et al.

We discuss Bayesian inference for parameters selected using the data. We argue that, in general, an adjustment for selection is necessary in order to achieve approximate repeated-sampling validity, and discuss two issues that emerge from such an adjustment. The first concerns a potential ambiguity in the choice of posterior distribution; the second concerns the choice of non-informative prior densities that lead to well-calibrated posterior inferences. We show that non-informative priors that are independent of the sample size tend to over-weight regions of the parameter space with low selection probability.
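To make the repeated-sampling point concrete, here is a minimal simulation sketch, not taken from the paper: a normal mean is reported only when its z-statistic exceeds a significance threshold, and the conditional coverage of equal-tailed credible intervals from the ordinary posterior is compared with that of intervals from a posterior built on the selection-truncated likelihood. The sample size, true mean, threshold, flat prior, and 90% nominal level below are all illustrative assumptions, not settings from the paper.

```python
# Minimal sketch (not from the paper): conditional coverage of Bayesian
# credible intervals with and without an adjustment for selection.
# Illustrative assumptions: X_1,...,X_n ~ N(theta, 1); the mean is reported
# only when sqrt(n) * xbar exceeds the 95% normal quantile; flat prior on theta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10                                   # sample size
theta_true = 0.3                         # small mean => low selection probability
c = stats.norm.ppf(0.95) / np.sqrt(n)    # selection threshold on xbar
grid = np.linspace(-10.0, 6.0, 8001)     # theta grid, wide enough to hold the posterior mass
level = 0.90

def credible_interval(log_post):
    """Equal-tailed interval from an unnormalised log-posterior evaluated on `grid`."""
    w = np.exp(log_post - log_post.max())
    cdf = np.cumsum(w) / w.sum()
    lo = grid[np.searchsorted(cdf, (1 - level) / 2)]
    hi = grid[np.searchsorted(cdf, (1 + level) / 2)]
    return lo, hi

cover_unadj = cover_adj = n_selected = 0
for _ in range(5000):
    xbar = rng.normal(theta_true, 1 / np.sqrt(n))
    if xbar <= c:
        continue                          # not selected: nothing is reported
    n_selected += 1
    # Unadjusted posterior: flat prior times the ordinary normal likelihood.
    log_lik = stats.norm.logpdf(xbar, loc=grid, scale=1 / np.sqrt(n))
    lo, hi = credible_interval(log_lik)
    cover_unadj += lo <= theta_true <= hi
    # Selection-adjusted posterior: likelihood truncated to the event {xbar > c},
    # i.e. divided by P(selection | theta) = Phi(sqrt(n) * (theta - c)).
    log_sel_prob = stats.norm.logcdf(np.sqrt(n) * (grid - c))
    lo, hi = credible_interval(log_lik - log_sel_prob)
    cover_adj += lo <= theta_true <= hi

print(f"selected in {n_selected} of 5000 replications")
print(f"conditional coverage, unadjusted:         {cover_unadj / n_selected:.3f}")
print(f"conditional coverage, selection-adjusted: {cover_adj / n_selected:.3f}")
```

In runs of this sketch, the unadjusted intervals should cover the true mean noticeably less often than the nominal 90% among the selected replications, while the truncated-likelihood intervals should come close to the nominal level; placing the true mean where selection is rarer widens the gap, which is the regime the abstract's final sentence refers to.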
