SGD Through the Lens of Kolmogorov Complexity

by Gregory Schwartzman, et al.

We prove that stochastic gradient descent (SGD) finds a solution that achieves (1-ϵ) classification accuracy on the entire dataset. We do so under two main assumptions: (1) Local progress: the model's accuracy improves consistently over batches. (2) Models compute simple functions: the function computed by the model is simple (has low Kolmogorov complexity). Intuitively, these assumptions mean that local progress of SGD implies global progress. Assumption (2) holds trivially for underparameterized models; hence, our work gives the first convergence guarantee for general underparameterized models. Furthermore, this is the first result that is completely model agnostic: we do not require the model to have any specific architecture or activation function, and it need not even be a neural network. Our analysis uses the entropy compression method, first introduced by Moser and Tardos in the context of the Lovász local lemma.
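The "local progress" assumption can be made concrete with a toy experiment. The sketch below (not the paper's construction; the model, data, and thresholds are illustrative assumptions) trains a logistic-regression classifier with SGD and records how often a gradient step improves accuracy on the current batch, alongside the final full-dataset accuracy:

```python
import numpy as np

# Toy illustration of the "local progress" assumption: count how often a
# single SGD step improves the model's accuracy on the batch it was taken on.
rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)  # linearly separable synthetic labels

def batch_accuracy(w, Xb, yb):
    """Fraction of examples in the batch classified correctly by sign."""
    return float(np.mean((Xb @ w > 0) == yb))

w = np.zeros(d)
lr, batch_size = 0.5, 32
improved, total = 0, 0
for epoch in range(5):
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        acc_before = batch_accuracy(w, Xb, yb)
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))   # sigmoid predictions
        w -= lr * Xb.T @ (p - yb) / len(yb)   # logistic-loss gradient step
        improved += batch_accuracy(w, Xb, yb) >= acc_before
        total += 1

final_acc = batch_accuracy(w, X, y)
print(f"steps with local progress: {improved}/{total}")
print(f"full-dataset accuracy: {final_acc:.2f}")
```

On this easy separable instance most steps make local progress and the final accuracy on the whole dataset is high, matching the intuition that consistent per-batch improvement translates into global progress.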



Convergence of stochastic gradient descent schemes for Łojasiewicz-landscapes

In this article, we consider convergence of stochastic gradient descent ...

Convergence of stochastic gradient descent under a local Łojasiewicz condition for deep neural networks

We extend the global convergence result of Chatterjee <cit.> by consider...

Global Convergence and Stability of Stochastic Gradient Descent

In machine learning, stochastic gradient descent (SGD) is widely deploye...

Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

In this paper we prove that Local (S)GD (or FedAvg) can optimize two-lay...

From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) has been the method of choice for lear...

Privacy-Preserving Deep Learning for any Activation Function

This paper considers the scenario that multiple data owners wish to appl...

SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics

We investigate the time complexity of SGD learning on fully-connected ne...
