The Practicality of Stochastic Optimization in Imaging Inverse Problems

10/22/2019
by Junqi Tang, et al.

In this work we investigate the practicality of stochastic gradient descent and its recently introduced variants with variance-reduction techniques in imaging inverse problems. Such algorithms have been shown in the machine learning literature to have optimal complexities in theory, and to provide significant empirical improvements over deterministic gradient methods. Surprisingly, in some tasks such as image deblurring, many such methods fail to converge faster than accelerated deterministic gradient methods, even in terms of epoch counts. We investigate this phenomenon and propose a theory-inspired mechanism for characterizing whether an inverse problem is better suited to stochastic optimization techniques. Using standard tools from numerical linear algebra, we derive conditions on the structure of the inverse problem under which stochastic gradient methods are a suitable choice. Based on our analysis, we provide practitioners with convenient ways to examine whether they should use stochastic gradient methods or classical deterministic gradient methods to solve a given inverse problem. Our results also provide guidance on appropriately choosing partition minibatch schemes. Finally, we propose an accelerated primal-dual SGD algorithm to tackle another key bottleneck of stochastic optimization: the heavy computation of proximal operators. The proposed method converges quickly in practice and efficiently handles non-smooth regularization terms coupled with linear operators.
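To make the underlying comparison concrete, the following is a minimal, illustrative sketch (not the paper's actual algorithm or experimental setup) of the finite-sum structure the abstract refers to: the least-squares data fit is split into disjoint row blocks of the forward operator, and one epoch of partition-minibatch SGD is contrasted with one deterministic gradient step. All dimensions, step sizes, and the random forward operator below are assumptions made purely for illustration.

import numpy as np

# Toy linear inverse problem: recover x from y = A x + noise, where A is a
# random matrix standing in for a forward operator such as a blur matrix.
rng = np.random.default_rng(0)
n, d = 1000, 50                       # measurements, unknowns (arbitrary)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
y = A @ x_true + 0.01 * rng.standard_normal(n)

def full_gradient_step(x, step):
    # Deterministic gradient descent on f(x) = (1/2n) * ||A x - y||^2.
    grad = A.T @ (A @ x - y) / n
    return x - step * grad

def partition_minibatch_sgd_epoch(x, step, num_blocks=10):
    # Partition-minibatch SGD: split the rows of A into disjoint blocks
    # and take one stochastic gradient step per block, in random order.
    blocks = np.array_split(rng.permutation(n), num_blocks)
    for idx in blocks:
        grad = A[idx].T @ (A[idx] @ x - y[idx]) / len(idx)
        x = x - step * grad
    return x

x_gd = np.zeros(d)
x_sgd = np.zeros(d)
for epoch in range(50):
    x_gd = full_gradient_step(x_gd, step=0.5)          # step sizes chosen ad hoc
    x_sgd = partition_minibatch_sgd_epoch(x_sgd, step=0.5)

print("GD  error:", np.linalg.norm(x_gd - x_true))
print("SGD error:", np.linalg.norm(x_sgd - x_true))

Whether the stochastic variant actually wins per epoch depends on the spectral structure of A and on how the rows are partitioned into minibatches, which is precisely the question the paper's analysis addresses.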


research
08/10/2021

An Analysis of Stochastic Variance Reduced Gradient for Linear Inverse Problems

Stochastic variance reduced gradient (SVRG) is a popular variance reduct...
research
06/20/2020

A Fast Stochastic Plug-and-Play ADMM for Imaging Inverse Problems

In this work we propose an efficient stochastic plug-and-play (PnP) algo...
research
02/05/2016

Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters

The amount of data available in the world is growing faster than our abi...
research
10/21/2020

On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems

Stochastic gradient descent (SGD) is a promising method for solving larg...
research
08/31/2022

Accelerating Deep Unrolling Networks via Dimensionality Reduction

In this work we propose a new paradigm for designing efficient deep unro...
research
02/20/2019

Active Probabilistic Inference on Matrices for Pre-Conditioning in Stochastic Optimization

Pre-conditioning is a well-known concept that can significantly improve ...
research
05/03/2020

On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective

Ill-posed linear inverse problems appear in many fields of imaging scien...
