
Maximum likelihood estimation of regularisation parameters in high-dimensional inverse problems: an empirical Bayesian approach

by Ana F. Vidal et al.

Many imaging problems require solving an inverse problem that is ill-conditioned or ill-posed. Imaging methods typically address this difficulty by regularising the estimation problem to make it well-posed. This often requires setting the value of the so-called regularisation parameters that control the amount of regularisation enforced. These parameters are notoriously difficult to set a priori, and can have a dramatic impact on the recovered estimates. In this paper, we propose a general empirical Bayesian method for setting regularisation parameters in imaging problems that are convex w.r.t. the unknown image. Our method calibrates regularisation parameters directly from the observed data by maximum marginal likelihood estimation, and can simultaneously estimate multiple regularisation parameters. A main novelty is that this maximum marginal likelihood estimation problem is efficiently solved by using a stochastic proximal gradient algorithm that is driven by two proximal Markov chain Monte Carlo samplers. Furthermore, the proposed algorithm uses the same basic operators as proximal optimisation algorithms, namely gradient and proximal operators, and it is therefore straightforward to apply to problems that are currently solved by using proximal optimisation techniques. We also present a detailed theoretical analysis of the proposed methodology, including asymptotic and non-asymptotic convergence results with easily verifiable conditions, and explicit bounds on the convergence rates. The proposed methodology is demonstrated with a range of experiments and comparisons with alternative approaches from the literature. The considered experiments include image denoising, non-blind image deconvolution, and hyperspectral unmixing, using synthesis and analysis priors involving the L1, total-variation, total-variation and L1, and total-generalised-variation pseudo-norms.
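The core idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is an assumed, simplified instance for Gaussian denoising with an L1 prior p(x|θ) ∝ exp(−θ‖x‖₁), where the chain is a Moreau–Yosida unadjusted Langevin sampler (MYULA) and θ is updated by projected stochastic gradient ascent on the marginal likelihood. For a 1-homogeneous regulariser g on R^d, the log normalising constant satisfies d/dθ log Z(θ) = −d/θ, which makes the marginal-likelihood gradient estimate d/θ − g(X_n). All step-size choices below are heuristic assumptions, not values from the paper.

```python
import numpy as np

def sapg_l1_denoising(y, sigma=1.0, theta0=1.0, n_iter=2000,
                      theta_min=1e-3, theta_max=1e3, seed=0):
    """Illustrative SAPG sketch: estimate the regularisation parameter
    theta for the model  y = x + N(0, sigma^2 I),
    p(x | theta) ∝ exp(-theta * ||x||_1),
    by stochastic ascent on the marginal likelihood p(y | theta),
    with posterior expectations approximated by a MYULA chain."""
    rng = np.random.default_rng(seed)
    d = y.size
    lam = sigma**2            # Moreau-Yosida smoothing parameter (assumption)
    gamma = 0.2 * sigma**2    # Langevin step size (heuristic)
    x = y.copy()
    theta = theta0
    for n in range(1, n_iter + 1):
        # Proximal operator of theta*||.||_1 is soft-thresholding.
        prox = np.sign(x) * np.maximum(np.abs(x) - theta * lam, 0.0)
        # MYULA step: gradient of the Gaussian data-fidelity term plus
        # the gradient of the Moreau-Yosida envelope of the prior.
        grad = (x - y) / sigma**2 + (x - prox) / lam
        x = x - gamma * grad + np.sqrt(2.0 * gamma) * rng.standard_normal(d)
        # Projected stochastic gradient ascent on log p(y | theta):
        # gradient estimate is d/theta - ||X_n||_1 (homogeneity trick).
        delta = 10.0 / (d * n)   # Robbins-Monro decreasing steps (assumption)
        theta = theta + delta * (d / theta - np.abs(x).sum())
        theta = min(max(theta, theta_min), theta_max)
    return theta, x

# Usage: denoise a sparse signal while calibrating theta from the data.
rng = np.random.default_rng(1)
x_true = rng.standard_normal(200) * (rng.random(200) < 0.1)
y = x_true + 0.5 * rng.standard_normal(200)
theta_hat, x_samp = sapg_l1_denoising(y, sigma=0.5)
```

Note how the sketch uses only the two ingredients highlighted in the abstract, a gradient step and a proximal operator, so the same loop structure carries over to any problem already solved by proximal optimisation.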



The split Gibbs sampler revisited: improvements to its algorithmic structure and augmented target distribution

This paper proposes a new accelerated proximal Markov chain Monte Carlo ...

Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie

Since the seminal work of Venkatakrishnan et al. (2013), Plug & Play (...

High-dimensional Bayesian model selection by proximal nested sampling

Imaging methods often rely on Bayesian statistical inference strategies ...

Fast unsupervised Bayesian image segmentation with adaptive spatial regularisation

This paper presents a new Bayesian estimation technique for hidden Potts...