
Deterministic Approximate EM Algorithm; Application to the Riemann Approximation EM and the Tempered EM

by Thomas Lartigue, et al.

The Expectation Maximisation (EM) algorithm is widely used to optimise non-convex likelihood functions with hidden variables. Many authors have modified its simple design to fit more specific situations. For instance, the Expectation (E) step has been replaced by Monte Carlo (MC) approximations, Markov Chain Monte Carlo approximations, and tempered approximations, among others. Most of the well-studied approximations belong to the stochastic class. By comparison, the literature on deterministic approximations is sparse. In this paper, we introduce a theoretical framework, with state-of-the-art convergence guarantees, for any deterministic approximation of the E step. We analyse, theoretically and empirically, several approximations that fit into this framework. First, for cases with an intractable E step, we introduce a deterministic alternative to the MC-EM based on Riemann sums. This method is easy to implement and does not require hyper-parameter tuning. Then, we consider the tempered approximation, borrowed from the Simulated Annealing optimisation technique and meant to improve the EM solution. We prove that the tempered EM satisfies the convergence guarantees for a wide range of temperature profiles. We show empirically how it is able to escape adversarial initialisations. Finally, we combine the Riemann and tempered approximations to accomplish both their purposes.
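To illustrate the Riemann-sum idea, here is a minimal sketch of a deterministic E step on a toy latent-variable model. The model (z ~ N(0, 1), x | z ~ N(z, sigma^2)), the grid bounds, and the function name are illustrative assumptions, not the paper's exact setup: the point is only that an intractable posterior expectation can be replaced by a Riemann sum over a fixed grid, with no sampling and no hyper-parameters to tune beyond the grid itself.

```python
import numpy as np

# Toy model (an assumption for illustration): z ~ N(0, 1), x | z ~ N(z, sigma^2).
# The E step needs E[z | x] = \int z p(z | x) dz; we approximate the integral
# by a Riemann sum over a fixed grid of z values.

def riemann_e_step(x, sigma, grid=np.linspace(-10.0, 10.0, 2001)):
    """Approximate E[z | x] deterministically by a Riemann sum over `grid`."""
    prior = np.exp(-0.5 * grid**2)                  # p(z), up to a constant
    lik = np.exp(-0.5 * ((x - grid) / sigma) ** 2)  # p(x | z), up to a constant
    post = prior * lik                              # unnormalised posterior p(z | x)
    # The grid step and normalising constants cancel in the ratio.
    return np.sum(grid * post) / np.sum(post)
```

In this conjugate Gaussian case the exact posterior mean is x / (1 + sigma^2), so the quality of the grid approximation can be checked directly; in a model where the E step is genuinely intractable, the same sum replaces the Monte Carlo average of an MC-EM.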
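The tempered approximation can likewise be sketched on a Gaussian mixture. In this hedged example (a one-dimensional mixture with a shared, fixed variance, which is my simplification rather than the paper's experimental setting), the E step's component log-posteriors are divided by a temperature T before renormalisation: T > 1 flattens the responsibilities, encouraging exploration away from a poor initialisation, and T = 1 recovers the standard EM E step.

```python
import numpy as np

def tempered_responsibilities(x, pis, mus, sigma, T):
    """Tempered E step for a 1-D Gaussian mixture (illustrative sketch).

    x: data points, shape (n,); pis, mus: mixture weights and means, shape (K,);
    sigma: shared standard deviation; T: temperature (T = 1 is standard EM).
    """
    # log p(x_i, k) up to an additive constant, for each point i and component k
    log_joint = (np.log(pis)[None, :]
                 - 0.5 * ((x[:, None] - mus[None, :]) / sigma) ** 2)
    log_joint = log_joint / T                          # tempering: flatten for T > 1
    log_joint -= log_joint.max(axis=1, keepdims=True)  # numerical stabilisation
    r = np.exp(log_joint)
    return r / r.sum(axis=1, keepdims=True)            # normalised responsibilities
```

A temperature profile that starts above 1 and decreases towards 1 over the iterations, in the spirit of Simulated Annealing, lets early iterations explore while later iterations converge as ordinary EM; the paper's guarantees cover a wide range of such profiles.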



