Maximum likelihood estimation of regularisation parameters in high-dimensional inverse problems: an empirical Bayesian approach. Part II: theoretical analysis

by Valentin De Bortoli et al.

This paper presents a detailed theoretical analysis of the three stochastic approximation proximal gradient algorithms proposed in our companion paper [49] to set regularisation parameters by marginal maximum likelihood estimation. We prove the convergence of a more general stochastic approximation scheme that includes the three algorithms of [49] as special cases. This includes asymptotic and non-asymptotic convergence results with natural and easily verifiable conditions, as well as explicit bounds on the convergence rates. Importantly, the theory is also general in that it can be applied to other intractable optimisation problems. A main novelty of the work is that the stochastic gradient estimates of our scheme are constructed from inexact proximal Markov chain Monte Carlo samplers. This allows the use of samplers that scale efficiently to large problems and for which we have precise theoretical guarantees.
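To illustrate the kind of scheme the abstract describes, the sketch below runs a stochastic approximation proximal gradient (SAPG) loop on a toy Gaussian model where the marginal likelihood is tractable, so the result can be checked in closed form. This is a minimal illustrative example, not the paper's algorithm: a plain unadjusted Langevin (ULA) step stands in for the inexact proximal MYULA sampler, and all model choices (dimension, noise level, step sizes, projection interval) are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Gaussian model (not the paper's imaging examples):
# y = x + w,  x_i ~ N(0, 1/theta_true),  w_i ~ N(0, sigma^2).
d, theta_true, sigma = 1000, 1.0, 0.5
x_true = rng.normal(0.0, 1.0 / np.sqrt(theta_true), d)
y = x_true + rng.normal(0.0, sigma, d)

def grad_log_posterior(x, theta):
    # d/dx log p(x | y, theta) for the Gaussian model above.
    return -theta * x - (x - y) / sigma**2

# SAPG loop: one inexact MCMC step (plain ULA here, standing in for a
# proximal sampler) per projected stochastic approximation update of theta.
x = y.copy()            # warm-start the chain at the observation
theta = 4.0             # deliberately poor initialisation
gamma = 0.02            # ULA step size; needs gamma < 2/(theta + 1/sigma^2)
for n in range(1, 5001):
    # Langevin step targeting (approximately) p(x | y, theta)
    x = x + gamma * grad_log_posterior(x, theta) \
        + np.sqrt(2.0 * gamma) * rng.normal(size=d)
    # Fisher-identity estimate of (1/d) d/dtheta log p(y | theta), using
    # log p(x | theta) = (d/2) log(theta) - (theta/2) ||x||^2 + const.
    g = 1.0 / (2.0 * theta) - 0.5 * np.mean(x**2)
    # Robbins-Monro step size; projection keeps theta in [1e-3, 1e3].
    theta = float(np.clip(theta + (2.0 / n**0.8) * g, 1e-3, 1e3))

# Closed-form check: marginally y_i ~ N(0, 1/theta + sigma^2), so the
# maximum marginal likelihood estimate is 1 / (mean(y^2) - sigma^2).
theta_mle = 1.0 / max(np.mean(y**2) - sigma**2, 1e-3)
print(f"SAPG estimate: {theta:.3f}   closed-form MMLE: {theta_mle:.3f}")
```

In this conjugate model the fixed point of the theta update coincides with the closed-form estimate (up to the discretisation bias of the inexact sampler), which is what the convergence theory formalises for intractable models where no such closed form exists.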


