Properties of the Stochastic Approximation EM Algorithm with Mini-batch Sampling

07/22/2019
by   Estelle Kuhn, et al.

To speed up convergence, a mini-batch version of the Markov chain Monte Carlo Stochastic Approximation Expectation Maximization (MCMC-SAEM) algorithm for general latent variable models is proposed. For exponential family models, the algorithm is shown to be convergent under classical conditions as the number of iterations increases. Numerical experiments illustrate the performance of the mini-batch algorithm in various models. In particular, we highlight that an appropriate choice of the mini-batch size can dramatically accelerate the convergence of the sequence of estimators generated by the algorithm. Moreover, we present insights into the effect of the mini-batch size on the limiting distribution.
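To make the idea concrete, here is a minimal sketch of a mini-batch MCMC-SAEM iteration on a hypothetical toy model (not the authors' code): latent z_i ~ N(theta, 1) and observed y_i | z_i ~ N(z_i, 1), so the complete-data sufficient statistic is the mean of the latent variables and the M-step is theta = mean(S). At each iteration, only a random mini-batch of individuals is refreshed by a Metropolis-Hastings step and has its sufficient statistic updated by stochastic approximation. The model, step-size schedule, batch size, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy data from the assumed model: z_i ~ N(theta, 1), y_i | z_i ~ N(z_i, 1).
rng = np.random.default_rng(0)
n = 200
true_theta = 2.0
z_true = rng.normal(true_theta, 1.0, n)
y = rng.normal(z_true, 1.0)

def mh_sweep(z, idx, theta, rng, prop_sd=0.5):
    """One Metropolis-Hastings update of the latent z_i, only for the
    mini-batch indices idx, targeting the conditional of z_i given y_i."""
    for i in idx:
        def logp(zi):
            # log complete-data density for coordinate i (up to a constant)
            return -0.5 * (zi - theta) ** 2 - 0.5 * (y[i] - zi) ** 2
        z_prop = z[i] + prop_sd * rng.normal()
        if np.log(rng.uniform()) < logp(z_prop) - logp(z[i]):
            z[i] = z_prop
    return z

theta = 0.0
z = np.zeros(n)
S = np.zeros(n)          # per-individual sufficient statistics
batch_size = 20
burn_in = 200            # constant step size during burn-in, then 1/k decay
for k in range(1, 2001):
    idx = rng.choice(n, batch_size, replace=False)   # draw a mini-batch
    z = mh_sweep(z, idx, theta, rng)                 # simulation step (MCMC)
    gamma = 1.0 if k <= burn_in else 1.0 / (k - burn_in)
    S[idx] += gamma * (z[idx] - S[idx])              # SA step, batch only
    theta = S.mean()                                 # M-step (closed form)
```

Per iteration, the cost of the simulation and stochastic-approximation steps scales with the mini-batch size rather than the full sample size, which is the source of the speed-up discussed in the abstract; the estimate theta should approach the maximum likelihood estimate (here, close to the sample mean of y).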

