
On the Discrepancy Principle for Stochastic Gradient Descent

04/30/2020 · by Tim Jahn, et al. · IG Farben Haus · UCL

Stochastic gradient descent (SGD) is a promising numerical method for solving large-scale inverse problems. However, its theoretical properties remain largely underexplored through the lens of classical regularization theory. In this note, we study the classical discrepancy principle, one of the most popular a posteriori choice rules, as the stopping criterion for SGD, and prove the finite-iteration termination property and the convergence of the iterate in probability as the noise level tends to zero. The theoretical results are complemented with extensive numerical experiments.
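To make the setup concrete, the following is a minimal Python/NumPy sketch of SGD for a linear inverse problem A x = y, stopped by the discrepancy principle: the iteration is terminated at the first index k with ||A x_k - y^delta|| <= tau * delta for some tau > 1. The operator A, noise level delta, step-size schedule, and the value of tau below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal illustrative sketch (not the authors' code): SGD for a linear
# inverse problem A x = y^delta, stopped by the discrepancy principle
#   stop at the first k with ||A x_k - y^delta|| <= tau * delta,  tau > 1.
# The operator A, noise level delta, step-size schedule, and tau are
# assumptions made for illustration only.

rng = np.random.default_rng(0)

n, m = 500, 20                        # number of equations and unknowns
A = rng.standard_normal((n, m)) / np.sqrt(m)   # rows have roughly unit norm
x_true = rng.standard_normal(m)

delta = 1e-2                          # assumed noise level
noise = rng.standard_normal(n)
y = A @ x_true + delta * noise / np.linalg.norm(noise)  # ||y - A x_true|| = delta

tau = 1.2                             # discrepancy-principle parameter, tau > 1
x = np.zeros(m)

for k in range(1, 50_001):
    i = rng.integers(n)               # draw one equation uniformly at random
    eta = 0.5 / k**0.1                # assumed polynomially decaying step size
    # SGD update using only the i-th equation: x <- x - eta * (a_i^T x - y_i) a_i
    x -= eta * (A[i] @ x - y[i]) * A[i]
    # a posteriori check of the full residual (done every step here for clarity)
    res = np.linalg.norm(A @ x - y)
    if res <= tau * delta:
        print(f"discrepancy principle met at iteration {k}: residual = {res:.3e}")
        break

print(f"reconstruction error ||x - x_true|| = {np.linalg.norm(x - x_true):.3e}")
```

In practice the full residual would typically be evaluated only every few epochs rather than at every step, since each check requires a complete pass over the data.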


Related research

08/10/2021 · An Analysis of Stochastic Variance Reduced Gradient for Linear Inverse Problems
Stochastic variance reduced gradient (SVRG) is a popular variance reduct...

03/23/2020 · A termination criterion for stochastic gradient descent for binary classification
We propose a new, simple, and computationally inexpensive termination te...

03/24/2022 · Local optimisation of Nyström samples through stochastic gradient descent
We study a relaxed version of the column-sampling problem for the Nyströ...

10/21/2020 · On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems
Stochastic gradient descent (SGD) is a promising method for solving larg...

07/09/2020 · Stochastic gradient descent for linear least squares problems with partially observed data
We propose a novel stochastic gradient descent method for solving linear...

06/18/2020 · Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping
Stochastic Gradient Descent (SGD) has become the method of choice for so...

01/16/2022 · On Maximum-a-Posteriori estimation with Plug & Play priors and stochastic gradient descent
Bayesian methods to solve imaging inverse problems usually combine an ex...