Parallel tempering as a mechanism for facilitating inference in hierarchical hidden Markov models

11/19/2020
by Giada Sacchi, et al.

The study of animal behavioural states inferred through hidden Markov models and similar state-switching models has seen a significant increase in popularity in recent years. Hierarchical hidden Markov models make it possible to account for behaviour at varying scales, but the additional levels increase model complexity and the correlation between model components. Maximum likelihood approaches to inference, via the EM algorithm or direct optimisation of the likelihood, are most frequently used, while Bayesian approaches are less favoured because of their computational demands. Given these demands, it is vital that efficient estimation algorithms are developed when Bayesian methods are preferred. We study the use of various approaches to improve convergence times and mixing in Markov chain Monte Carlo methods applied to hierarchical hidden Markov models, including parallel tempering as an inference facilitation mechanism. The method shows promise for analysing complex stochastic models with high levels of correlation between components, but our results show that it requires careful tuning in order to maximise that potential.
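To illustrate the general idea of parallel tempering referred to in the abstract, the sketch below shows a minimal replica-exchange random-walk Metropolis sampler. It is not the authors' implementation: the `log_post` function is a hypothetical placeholder standing in for a hierarchical hidden Markov model log-posterior (which would typically be evaluated with the forward algorithm plus priors), and the temperature ladder, step size, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)


def log_post(theta):
    # Placeholder log-posterior; a real application would evaluate the
    # hierarchical HMM likelihood (e.g. via the forward algorithm) plus priors.
    return -0.5 * np.sum(theta ** 2)


def parallel_tempering(log_post, theta0, temps, n_iter=5000, step=0.5):
    """Minimal random-walk Metropolis sampler with replica exchange.

    temps: increasing temperatures with temps[0] = 1, the cold (target) chain.
    Chain k targets log_post(theta) / temps[k]; adjacent chains periodically
    propose to swap their current states.
    """
    n_chains = len(temps)
    thetas = [np.array(theta0, dtype=float) for _ in range(n_chains)]
    logps = [log_post(t) for t in thetas]
    cold_samples = []

    for _ in range(n_iter):
        # Within-chain random-walk Metropolis update at each temperature.
        for k in range(n_chains):
            prop = thetas[k] + step * np.sqrt(temps[k]) * rng.standard_normal(thetas[k].shape)
            logp_prop = log_post(prop)
            if np.log(rng.uniform()) < (logp_prop - logps[k]) / temps[k]:
                thetas[k], logps[k] = prop, logp_prop

        # Propose swapping the states of a random pair of adjacent temperatures.
        k = rng.integers(n_chains - 1)
        log_accept = (logps[k] - logps[k + 1]) * (1.0 / temps[k + 1] - 1.0 / temps[k])
        if np.log(rng.uniform()) < log_accept:
            thetas[k], thetas[k + 1] = thetas[k + 1], thetas[k]
            logps[k], logps[k + 1] = logps[k + 1], logps[k]

        cold_samples.append(thetas[0].copy())

    return np.array(cold_samples)


# Toy usage: sample a 3-dimensional posterior with a geometric temperature ladder.
samples = parallel_tempering(log_post, theta0=np.zeros(3), temps=[1.0, 2.0, 4.0, 8.0])
print(samples.mean(axis=0), samples.std(axis=0))
```

The hotter chains explore a flattened version of the posterior and the swap moves let well-mixed states propagate down to the cold chain; as the abstract notes, the number of temperatures, their spacing, and the proposal step sizes all require careful tuning for the approach to pay off.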
