A large deviation principle for the empirical measures of Metropolis-Hastings chains

04/05/2023
by Federica Milinanni, et al.

To sample from a given target distribution, Markov chain Monte Carlo (MCMC) sampling relies on constructing an ergodic Markov chain with the target distribution as its invariant measure. For any MCMC method, an important question is how to evaluate its efficiency. One approach is to consider the associated empirical measure and how fast it converges to the stationary distribution of the underlying Markov process. Recently, this question has been considered from the perspective of large deviation theory for different types of MCMC methods, including, e.g., non-reversible Metropolis-Hastings on a finite state space, non-reversible Langevin samplers, the zig-zag sampler, and parallel tempering. This approach, based on large deviations, has proven successful in analysing existing methods and designing new, efficient ones. However, for the Metropolis-Hastings algorithm on more general state spaces, the workhorse of MCMC sampling, the same techniques have not been available for analysing performance, as the underlying Markov chain dynamics violate the conditions used to prove existing large deviation results for empirical measures of a Markov chain. This also extends to methods built on the same idea as Metropolis-Hastings, such as the Metropolis-adjusted Langevin algorithm or ABC-MCMC. In this paper, we take the first steps towards such a large-deviations-based analysis of Metropolis-Hastings-like methods by proving a large deviation principle for the empirical measures of Metropolis-Hastings chains. In addition, we characterize the rate function and its properties in terms of the acceptance and rejection parts of the Metropolis-Hastings dynamics.
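To make the setting concrete, the following sketches the standard large deviation framework for empirical measures; this is general background, not a statement of the paper's theorem. For a Markov chain $(X_i)_{i \ge 0}$ with transition kernel $P$ and invariant distribution $\pi$, the empirical measure after $n$ steps is

\[
L_n = \frac{1}{n} \sum_{i=0}^{n-1} \delta_{X_i},
\]

and a large deviation principle with rate function $I$ states, roughly, that

\[
\mathbb{P}(L_n \approx \mu) \asymp e^{-n I(\mu)}, \qquad I(\mu) \ge 0, \qquad I(\pi) = 0.
\]

The size of $I$ away from $\pi$ thus quantifies how quickly $L_n$ concentrates on the target. Under the classical Donsker-Varadhan conditions, which, as noted in the abstract, Metropolis-Hastings chains on general state spaces need not satisfy, the rate function admits the variational form

\[
I(\mu) = -\inf_{u > 0} \int \log \frac{P u}{u} \, d\mu.
\]

As a further illustration of the acceptance/rejection dynamics referred to above, here is a minimal sketch of a random-walk Metropolis-Hastings sampler and its empirical measure, assuming an illustrative one-dimensional unnormalized Gaussian target and a symmetric Gaussian proposal; the function and parameter names are illustrative and not taken from the paper.

    import numpy as np

    def metropolis_hastings(target_pdf, x0, n_steps, proposal_std=1.0, rng=None):
        """Random-walk Metropolis-Hastings; returns the chain X_0, ..., X_{n-1}."""
        rng = np.random.default_rng() if rng is None else rng
        chain = np.empty(n_steps)
        x = x0
        for i in range(n_steps):
            y = x + proposal_std * rng.standard_normal()  # propose a candidate move
            # For a symmetric proposal, accept with probability
            # min(1, target(y)/target(x)) (acceptance part);
            # otherwise remain at x (rejection part).
            if rng.random() < min(1.0, target_pdf(y) / target_pdf(x)):
                x = y
            chain[i] = x
        return chain

    # Empirical measure L_n = (1/n) * sum_i delta_{X_i}, summarized by a histogram.
    target = lambda z: np.exp(-0.5 * z ** 2)  # unnormalized N(0, 1) density
    chain = metropolis_hastings(target, x0=0.0, n_steps=50_000)
    hist, edges = np.histogram(chain, bins=50, density=True)

As $n$ grows, the histogram approaches the target density; the large deviation rate function describes the exponential decay of the probability of observing an empirical measure far from the target.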


Related research

01/13/2022
Boost your favorite Markov Chain Monte Carlo sampler using Kac's theorem: the Kick-Kac teleportation algorithm
The present paper focuses on the problem of sampling from a given target...

12/28/2021
A Near-Optimal Finite Approximation Approach for Computing Stationary Distribution and Performance Measures of Continuous-State Markov Chains
Analysis and use of stochastic models represented by a discrete-time Mar...

12/29/2020
Metropolis-Hastings with Averaged Acceptance Ratios
Markov chain Monte Carlo (MCMC) methods to sample from a probability dis...

07/04/2023
Generative Flow Networks: a Markov Chain Perspective
While Markov chain Monte Carlo methods (MCMC) provide a general framewor...

12/07/2020
Sequential Stratified Regeneration: MCMC for Large State Spaces with an Application to Subgraph Counting Estimation
This work considers the general task of estimating the sum of a bounded ...

03/17/2014
A reversible infinite HMM using normalised random measures
We present a nonparametric prior over reversible Markov chains. We use c...

04/22/2017
Reversible Jump Metropolis Light Transport using Inverse Mappings
We study Markov Chain Monte Carlo (MCMC) methods operating in primary sa...
