Investigating the efficiency of marginalising over discrete parameters in Bayesian computations

04/13/2022
by Wen Zhang, et al.

Bayesian analysis methods often use some form of iterative simulation such as Monte Carlo computation. Models that involve discrete variables can sometimes pose a challenge, either because the methods used do not support such variables (e.g. Hamiltonian Monte Carlo) or because the presence of such variables can slow down the computation. A common workaround is to marginalise the discrete variables out of the model. While it is reasonable to expect that such marginalisation would also lead to more time-efficient computations, to our knowledge this has not been demonstrated beyond a few specialised models. We explored the impact of marginalisation on computational efficiency for a few simple statistical models. Specifically, we considered two- and three-component Gaussian mixture models, as well as the Dawid-Skene model for categorical ratings. We explored each with two software implementations of Markov chain Monte Carlo techniques: JAGS and Stan. We directly compared marginalised and non-marginalised versions of the same model within the same software package. Our results show that marginalisation on its own does not necessarily boost performance. Nevertheless, the best performance was usually achieved with Stan, which requires marginalisation. We conclude that there is no simple answer to whether marginalisation is helpful. Marginalisation cannot be assumed to provide a computational benefit independent of other factors, nor is it likely to be the model component with the largest impact on computational efficiency.
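To make the marginalisation concrete, the sketch below shows, for a two-component Gaussian mixture, how the discrete component indicators can be summed out of the likelihood so that only continuous parameters remain (the form a sampler such as Stan's Hamiltonian Monte Carlo requires). This is a minimal illustration, not code from the paper: the function name and the numerical values are invented for the example, and the log-sum-exp trick is used only for numerical stability.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def mixture_log_lik(y, weights, mu, sigma):
    """Log-likelihood of a K-component Gaussian mixture with the discrete
    component indicators z_i marginalised out:
        log p(y_i) = log sum_k weights_k * Normal(y_i | mu_k, sigma_k),
    evaluated stably on the log scale via log-sum-exp.
    (Hypothetical helper for illustration; not from the paper.)"""
    y = np.asarray(y)[:, None]                         # shape (N, 1)
    comp_logpdf = norm.logpdf(y, loc=mu, scale=sigma)  # shape (N, K)
    return np.sum(logsumexp(np.log(weights) + comp_logpdf, axis=1))

# Illustrative values only (not taken from the paper).
y = np.array([-1.2, 0.3, 2.8, 3.1])
print(mixture_log_lik(y, weights=[0.4, 0.6], mu=[0.0, 3.0], sigma=[1.0, 0.5]))
```

Because this log-likelihood no longer involves discrete parameters, it can be handed to a gradient-based sampler; if per-observation component memberships are needed, they can be recovered afterwards from the component-wise posterior probabilities.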
