Lower Bounds on the Rate of Convergence for Accept-Reject-Based Markov Chains
To avoid poor empirical performance in Metropolis-Hastings and other accept-reject-based algorithms, practitioners often tune them by trial and error. Lower bounds on the convergence rate are developed in both total variation and Wasserstein distances in order to identify how the simulations will fail, so that these settings can be avoided, providing guidance on tuning. Particular attention is paid to using the lower bounds to study the convergence complexity of accept-reject-based Markov chains and to constrain the rate of convergence for geometrically ergodic Markov chains. The theory is applied in several settings. For example, if the target density concentrates with a parameter n (e.g., posterior concentration, Laplace approximations), it is demonstrated that the convergence rate of a Metropolis-Hastings chain can tend to 1 exponentially fast if the tuning parameters do not depend carefully on n. This is illustrated with Bayesian logistic regression under Zellner's g-prior, when the dimension d and sample size n increase in such a way that d/n → γ ∈ (0, 1), and with flat-prior Bayesian logistic regression as n → ∞.
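The mechanism behind the concentration result can be seen in a minimal sketch, not taken from the paper: a random-walk Metropolis chain targeting a one-dimensional Gaussian N(0, 1/n) that concentrates as n grows (the target, step counts, and the n^{-1/2} proposal scaling below are illustrative assumptions). With a fixed proposal scale, the acceptance rate collapses toward 0 as n increases, so the chain rarely moves, which is consistent with a convergence rate tending to 1; rescaling the proposal with n keeps the acceptance rate stable.

```python
import numpy as np

def rw_metropolis(log_target, x0, scale, n_steps, rng):
    """Random-walk Metropolis with Gaussian proposals; returns the acceptance rate."""
    x = x0
    accepted = 0
    for _ in range(n_steps):
        y = x + scale * rng.standard_normal()
        # Accept with probability min(1, pi(y)/pi(x)).
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
            accepted += 1
    return accepted / n_steps

rng = np.random.default_rng(0)
for n in [10, 100, 1000, 10000]:
    # Target density proportional to exp(-n x^2 / 2), i.e. N(0, 1/n), concentrating in n.
    log_target = lambda x, n=n: -0.5 * n * x**2
    fixed = rw_metropolis(log_target, 0.0, 1.0, 20_000, rng)        # tuning ignores n
    tuned = rw_metropolis(log_target, 0.0, n**-0.5, 20_000, rng)    # scale ~ n^{-1/2}
    print(f"n={n:>5}  acceptance: fixed scale = {fixed:.3f}, scaled = {tuned:.3f}")
```

Running this, the fixed-scale acceptance rate decays rapidly with n while the rescaled chain's stays roughly constant, mirroring the paper's point that tuning parameters must depend carefully on n when the target concentrates.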