A Bayesian Multiple Testing Paradigm for Model Selection in Inverse Regression Problems

07/15/2020
by Debashis Chatterjee et al.

In this article, we propose a novel Bayesian multiple testing formulation for model and variable selection in inverse regression setups, judiciously embedding the idea of inverse reference distributions proposed by Bhattacharya (2013) in a mixture framework consisting of the competing models. We develop the theory and methods in a general context that encompasses parametric and nonparametric competing models, dependent data, and misspecification. Our investigation shows that, asymptotically, the multiple testing procedure almost surely selects the best possible inverse model, namely the one whose minimum Kullback-Leibler divergence from the true model is smallest. We also show that the error rates, namely versions of the false discovery rate (FDR) and the false non-discovery rate (FNR), converge to zero almost surely as the sample size goes to infinity. Asymptotic α-control of the FDR versions, and its impact on the convergence of the FNR versions, is also investigated. Our simulation experiments involve small-sample selection among inverse Poisson log regression and inverse geometric logit and probit regression models, where the regression functions are either linear or modeled by Gaussian processes; variable selection is also considered. Our multiple testing results are very encouraging: the best models are selected in all cases, both correctly specified and misspecified.
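To make the flavor of the approach concrete, the sketch below casts Bayesian model selection as multiple testing in the simplest conceivable way: each hypothesis H_j ("model M_j is the best inverse model") is assessed through its posterior model probability under equal prior weights, and a posterior analogue of the FDR is reported for the selections. This is only a rough illustration under strong simplifying assumptions, not the authors' procedure; it replaces the paper's inverse-reference-distribution machinery with crude Monte Carlo estimates of marginal likelihoods, and the priors, threshold, and function names are all hypothetical.

```python
# Hypothetical sketch: Bayesian model selection cast as multiple testing.
# NOT the authors' method (which embeds Bhattacharya's 2013 inverse
# reference distributions in a mixture of the competing models); this
# uses plain posterior model probabilities for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate data from a Poisson log-linear regression (the "true" model).
n = 100
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 1.0 * x))

def log_marginal_poisson(y, x, n_draws=5000):
    """Monte Carlo estimate of log m(y) under a Poisson log-linear model,
    with N(0, 2^2) priors on intercept and slope (an arbitrary choice)."""
    a = rng.normal(0.0, 2.0, size=n_draws)
    b = rng.normal(0.0, 2.0, size=n_draws)
    lam = np.exp(a[:, None] + b[:, None] * x[None, :])
    ll = stats.poisson.logpmf(y[None, :], lam).sum(axis=1)
    return np.logaddexp.reduce(ll) - np.log(n_draws)

def log_marginal_geometric(y, x, n_draws=5000):
    """Same estimate under a competing geometric logit regression."""
    a = rng.normal(0.0, 2.0, size=n_draws)
    b = rng.normal(0.0, 2.0, size=n_draws)
    p = 1.0 / (1.0 + np.exp(-(a[:, None] + b[:, None] * x[None, :])))
    ll = stats.geom.logpmf(y[None, :] + 1, p).sum(axis=1)  # shift support to 1, 2, ...
    return np.logaddexp.reduce(ll) - np.log(n_draws)

# Posterior model probabilities under equal prior weights: the mixture
# over competing models reduced to its simplest possible form.
log_m = np.array([log_marginal_poisson(y, x), log_marginal_geometric(y, x)])
post = np.exp(log_m - np.logaddexp.reduce(log_m))

# Multiple-testing-style decision: accept H_j (select model j) when its
# posterior probability clears a threshold; 0.5 is an arbitrary choice.
decisions = post > 0.5
print("posterior model probabilities:", post)
print("selected model(s):", np.nonzero(decisions)[0])

# A posterior analogue of the false discovery rate among the selections,
# one of several "versions" of error rate one might monitor.
if decisions.any():
    print("posterior FDR-type error:", np.mean(1.0 - post[decisions]))
```

Running the sketch on data simulated from the Poisson model typically places nearly all posterior mass on the Poisson hypothesis, mirroring in miniature the paper's asymptotic result that the error-rate versions vanish as the sample size grows.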
