
On the Discrepancy Principle for Stochastic Gradient Descent

by   Tim Jahn, et al.

Stochastic gradient descent (SGD) is a promising numerical method for solving large-scale inverse problems. However, its theoretical properties remain largely underexplored through the lens of classical regularization theory. In this note, we study the classical discrepancy principle, one of the most popular a posteriori choice rules, as the stopping criterion for SGD, and prove the finite-iteration termination property and the convergence of the iterates in probability as the noise level tends to zero. The theoretical results are complemented with extensive numerical experiments.
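The stopping rule studied here can be illustrated in a few lines. The sketch below is a minimal, assumed setup (not the authors' exact experiments): SGD sweeps randomly sampled equations of a linear problem A x = y^delta, and the iteration terminates at the first iterate whose full residual satisfies the discrepancy principle ||A x_k - y^delta|| <= tau * delta. The problem sizes, step size eta, and constant tau are illustrative choices.

```python
import numpy as np

# Illustrative sketch of the discrepancy principle as a stopping rule for
# SGD on a linear problem A x = y^delta (sizes and constants are assumed).
rng = np.random.default_rng(0)
n, m = 20, 50                       # n equations, m unknowns
A = rng.standard_normal((n, m)) / np.sqrt(m)   # rows have roughly unit norm
x_true = rng.standard_normal(m)
delta = 0.05                        # noise level, assumed known
noise = rng.standard_normal(n)
y = A @ x_true + delta * noise / np.linalg.norm(noise)

tau = 1.1                           # discrepancy constant, tau > 1
eta = 1.0                           # simple constant step size
x = np.zeros(m)
max_iter = 20000

for k in range(max_iter):
    # A posteriori stopping: terminate once the full residual is small.
    if np.linalg.norm(A @ x - y) <= tau * delta:
        break
    i = rng.integers(n)             # sample one equation uniformly
    # SGD step on the single-equation loss (a_i^T x - y_i)^2 / 2
    x -= eta * (A[i] @ x - y[i]) * A[i]

print("stopped at iteration", k,
      "with residual", np.linalg.norm(A @ x - y))
```

Checking the residual against tau * delta at every step is the a posteriori element: the stopping index depends on the observed data and the noise level delta, not on a pre-chosen iteration budget.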



