Convergence Analysis of Riemannian Stochastic Approximation Schemes

05/27/2020
by   Alain Durmus, et al.

This paper analyzes the convergence of a large class of Riemannian stochastic approximation (SA) schemes for stochastic optimization problems. In particular, the recursions we study use either the exponential map of the underlying manifold (geodesic schemes) or more general retraction functions (retraction schemes) that serve as proxies for the exponential map. Such approximations are of great interest since they are low-complexity alternatives to geodesic schemes. Under the assumption that the mean field of the SA is correlated with the gradient of a smooth (possibly non-convex) Lyapunov function, we show that these Riemannian SA schemes find an O(b_∞ + log n / √n)-stationary point (in expectation) within O(n) iterations, where b_∞ ≥ 0 is the asymptotic bias. Compared to previous works, the conditions we derive are considerably milder. First, our analysis is global: we do not assume the iterates to be a priori bounded. Second, we study biased SA schemes; specifically, we consider the case where the mean-field function can only be estimated up to a small bias, and/or the case in which the samples are drawn from a controlled Markov chain. Third, the conditions on the retractions required to ensure convergence of the related SA schemes are weak and hold for well-known examples. We illustrate our results on three machine learning problems.
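To make the retraction-scheme recursion concrete, the following is a minimal sketch (not taken from the paper) of retraction-based stochastic approximation on the unit sphere. The quadratic objective, the metric-projection retraction R_x(v) = (x + v)/‖x + v‖, and the 1/√n step-size schedule are illustrative assumptions; the retraction here plays the role of the low-complexity proxy for the exponential map described above.

```python
import numpy as np

# Illustrative example (assumed setup, not from the paper): minimize
# f(x) = x^T A x over the unit sphere S^{d-1} using noisy Riemannian
# gradients and the metric-projection retraction R_x(v) = (x+v)/||x+v||,
# a cheap proxy for the exponential map (which follows great circles).

rng = np.random.default_rng(0)
d = 5
A = np.diag(np.arange(1.0, d + 1))  # eigenvalues 1..d; sphere minimizer is ±e_1

def riemannian_grad(x):
    g = 2.0 * A @ x                  # Euclidean gradient of x^T A x
    return g - (x @ g) * x           # project onto the tangent space at x

def retract(x, v):
    y = x + v                        # move in the tangent direction ...
    return y / np.linalg.norm(y)     # ... then project back onto the sphere

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for n in range(1, 5001):
    noise = 0.1 * rng.standard_normal(d)
    noise -= (x @ noise) * x                     # keep the noise tangent
    g = riemannian_grad(x) + noise               # noisy mean-field estimate
    x = retract(x, -(0.5 / np.sqrt(n)) * g)      # diminishing step size

# The iterates approach a stationary point, here the minimizer ±e_1,
# so x @ A @ x approaches the minimal eigenvalue 1.
```

With the exponential map in place of `retract`, the update would trace an exact great circle; the projection retraction agrees with it to first order in the step size, which is the sense in which it acts as a proxy.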


