Maximum likelihood estimation of regularisation parameters in high-dimensional inverse problems: an empirical Bayesian approach

11/26/2019
by   Ana F. Vidal, et al.

Many imaging problems require solving an inverse problem that is ill-conditioned or ill-posed. Imaging methods typically address this difficulty by regularising the estimation problem to make it well-posed. This often requires setting the value of the so-called regularisation parameters that control the amount of regularisation enforced. These parameters are notoriously difficult to set a priori, and can have a dramatic impact on the recovered estimates. In this paper, we propose a general empirical Bayesian method for setting regularisation parameters in imaging problems that are convex w.r.t. the unknown image. Our method calibrates regularisation parameters directly from the observed data by maximum marginal likelihood estimation, and can simultaneously estimate multiple regularisation parameters. A main novelty is that this maximum marginal likelihood estimation problem is efficiently solved by using a stochastic proximal gradient algorithm that is driven by two proximal Markov chain Monte Carlo samplers. Furthermore, the proposed algorithm uses the same basic operators as proximal optimisation algorithms, namely gradient and proximal operators, and it is therefore straightforward to apply to problems that are currently solved by using proximal optimisation techniques. We also present a detailed theoretical analysis of the proposed methodology, including asymptotic and non-asymptotic convergence results with easily verifiable conditions, and explicit bounds on the convergence rates. The proposed methodology is demonstrated with a range of experiments and comparisons with alternative approaches from the literature. The considered experiments include image denoising, non-blind image deconvolution, and hyperspectral unmixing, using synthesis and analysis priors involving the L1, total-variation, total-variation and L1, and total-generalised-variation pseudo-norms.
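To make the idea concrete, below is a minimal, hypothetical Python sketch of the stochastic approximation proximal gradient (SAPG) idea the abstract describes, specialised to estimating a single L1 regularisation parameter in a Gaussian denoising model. Because the L1 prior is 1-homogeneous, the theta-derivative of its log normalising constant is available in closed form, so this sketch only runs one proximal MCMC (MYULA-style) chain on the posterior; the paper's general scheme drives the parameter update with a second chain targeting the prior. All function and parameter names (sapg_l1_denoising, delta0, theta_min, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sapg_l1_denoising(y, sigma, n_iter=5000, theta0=0.01,
                      theta_min=1e-4, theta_max=1.0, delta0=10.0):
    """Sketch of empirical Bayesian estimation of the L1 regularisation
    parameter theta for the denoising model y = x + n, n ~ N(0, sigma^2 I).

    One Langevin chain (MYULA-style) samples x from p(x | y, theta); theta is
    updated by projected stochastic gradient ascent on the marginal likelihood,
    using the closed-form term d/theta from the 1-homogeneous L1 prior.
    """
    d = y.size
    x = y.copy().ravel()
    theta = theta0
    lam = sigma ** 2                                 # Moreau-Yosida smoothing parameter
    gamma = 0.5 / (1.0 / sigma**2 + 1.0 / lam)       # Langevin step size within stability range

    thetas = np.empty(n_iter)
    for k in range(n_iter):
        # --- MYULA step on x, targeting the posterior for the current theta ---
        grad_f = (x - y.ravel()) / sigma**2                       # gradient of data fidelity
        grad_prior = (x - soft_threshold(x, lam * theta)) / lam   # gradient of smoothed L1 prior
        x = x - gamma * (grad_f + grad_prior) + np.sqrt(2.0 * gamma) * np.random.randn(d)

        # --- projected stochastic gradient step on theta (marginal likelihood ascent) ---
        delta = delta0 / (d * (k + 1) ** 0.8)        # decaying step size
        grad_theta = d / theta - np.sum(np.abs(x))   # noisy estimate of d log p(y|theta)/d theta
        theta = np.clip(theta + delta * grad_theta, theta_min, theta_max)
        thetas[k] = theta

    burn = n_iter // 2
    return thetas[burn:].mean(), thetas              # average iterates after burn-in
```

Note that the only operators appearing in the loop are a gradient of the data fidelity and a proximal operator of the prior, which is the point made in the abstract: the scheme plugs directly into problems already handled by proximal optimisation, with the estimated theta then reused in a standard MAP solver.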


Related research

03/08/2021
Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie
Since the seminal work of Venkatakrishnan et al. (2013), Plug & Play (...)

06/28/2022
The split Gibbs sampler revisited: improvements to its algorithmic structure and augmented target distribution
This paper proposes a new accelerated proximal Markov chain Monte Carlo ...

12/22/2015
FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging
The total variation (TV) penalty, as many other analysis-sparsity problems ...

02/05/2015
Fast unsupervised Bayesian image segmentation with adaptive spatial regularisation
This paper presents a new Bayesian estimation technique for hidden Potts ...

08/18/2023
Accelerated Bayesian imaging by relaxed proximal-point Langevin sampling
This paper presents a new accelerated proximal Markov chain Monte Carlo ...
