Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces

03/16/2023
by   Marta Lazzaretti, et al.

We consider a stochastic gradient descent (SGD) algorithm for solving linear inverse problems (e.g., CT image reconstruction) in the Banach space framework of variable exponent Lebesgue spaces ℓ^(p_n)(ℝ). Such non-standard spaces have recently been shown to provide the appropriate functional framework for enforcing pixel-adaptive regularisation in signal and image processing applications. Compared to its use in Hilbert settings, however, the application of SGD in the Banach setting of ℓ^(p_n)(ℝ) is not straightforward, due, in particular, to the lack of a closed-form expression for the underlying norm and its non-separability. In this manuscript, we show that SGD iterations can effectively be performed using the associated modular function. Numerical validation on both simulated and real CT data shows significant improvements over SGD solutions in both Hilbert and other Banach settings, in particular when non-Gaussian or mixed noise is observed in the data.
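The key idea above is to replace the (non-separable) norm of ℓ^(p_n)(ℝ) with the separable modular ρ(x) = Σ_n |x_n|^(p_n), whose componentwise derivative is available in closed form. The sketch below is a hedged, illustrative implementation, not the authors' algorithm: it runs SGD on a dual variable for a least-squares data term and recovers the primal iterate through the componentwise inverse of the modular's gradient map (assuming every exponent p_n > 1); the function names `modular` and `sgd_modular` are hypothetical.

```python
import numpy as np

def modular(x, p):
    # Variable-exponent modular: rho(x) = sum_n |x_n|^(p_n).
    return np.sum(np.abs(x) ** p)

def sgd_modular(A, b, p, eta=1e-3, n_iter=500, seed=0):
    """Illustrative dual SGD sketch for 0.5 * ||Ax - b||^2 with a
    variable-exponent modular coupling (assumes all p_n > 1)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    xs = np.zeros(n)  # dual iterate x*

    def primal(xs):
        # Componentwise inverse of the modular's gradient map:
        # x_n = sign(xs_n) * |xs_n|^(1 / (p_n - 1)); reduces to the
        # identity when p_n = 2 (the Hilbert case).
        return np.sign(xs) * np.abs(xs) ** (1.0 / (p - 1.0))

    for _ in range(n_iter):
        i = rng.integers(m)                 # sample one row at random
        x = primal(xs)                      # map dual -> primal
        g = (A[i] @ x - b[i]) * A[i]        # stochastic gradient of 0.5*(A_i x - b_i)^2
        xs = xs - eta * m * g               # dual gradient step (m rescales the sample)
    return primal(xs)
```

When p_n ≡ 2 the dual map is the identity and the scheme collapses to ordinary Hilbert-space SGD, which is a useful sanity check; spatially varying p_n then biases the reconstruction componentwise, mimicking the pixel-adaptive behaviour described in the abstract.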


Related research

- 02/10/2023 — On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
  In this work we consider stochastic gradient descent (SGD) for solving l...

- 09/29/2022 — Statistical Learning and Inverse Problems: A Stochastic Gradient Approach
  Inverse problems are paramount in Science and Engineering. In this paper...

- 06/18/2020 — Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping
  Stochastic Gradient Descent (SGD) has become the method of choice for so...

- 12/10/2021 — Modular-proximal gradient algorithms in variable exponent Lebesgue spaces
  We consider structured optimisation problems defined in terms of the sum...

- 01/17/2020 — Chebyshev Inertial Landweber Algorithm for Linear Inverse Problems
  The Landweber algorithm defined on complex/real Hilbert spaces is a grad...

- 01/16/2022 — On Maximum-a-Posteriori estimation with Plug & Play priors and stochastic gradient descent
  Bayesian methods to solve imaging inverse problems usually combine an ex...

- 01/27/2022 — Benchmarking learned non-Cartesian k-space trajectories and reconstruction networks
  We benchmark the current existing methods to jointly learn non-Cartesian...
