Stochastic Mirror Descent in Average Ensemble Models

10/27/2022
by Taylan Kargin et al.

The stochastic mirror descent (SMD) algorithm is a general class of training algorithms that includes the celebrated stochastic gradient descent (SGD) as a special case. It utilizes a mirror potential to influence the implicit bias of the training algorithm. In this paper, we explore the performance of the SMD iterates on mean-field ensemble models. Our results generalize earlier ones obtained for SGD on such models. The evolution of the distribution of parameters is mapped to a continuous-time process in the space of probability distributions. Our main result is a nonlinear partial differential equation to which the continuous-time process converges in the asymptotic regime of large networks. The impact of the mirror potential appears through a multiplicative term equal to the inverse of its Hessian, which can be interpreted as defining a gradient flow over an appropriately defined Riemannian manifold. We provide numerical simulations that allow us to study and characterize the effect of the mirror potential on the performance of networks trained with SMD for some binary classification problems.
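As a concrete illustration (not taken from the paper), the discrete SMD update takes a gradient step in the dual space defined by the mirror potential ψ: ∇ψ(w_{t+1}) = ∇ψ(w_t) − η ∇L(w_t; x_t). In the small-step limit this recursion becomes the mirror flow ẇ_t = −[∇²ψ(w_t)]^{-1} ∇L(w_t), which is where the inverse-Hessian multiplier mentioned above originates. The sketch below implements the discrete update in plain NumPy; the names (smd, mirror_map, mirror_map_inv, grad_loss) are hypothetical and chosen for readability, not drawn from the paper's code.

```python
import numpy as np

def smd(w0, grad_loss, mirror_map, mirror_map_inv, samples, lr=0.1):
    """One pass of stochastic mirror descent over a stream of samples.

    The gradient step is taken in the dual space z = grad psi(w); mapping
    back through the inverse mirror map yields the next primal iterate.
    """
    w = w0
    for x in samples:
        z = mirror_map(w)             # dual variable: z = grad psi(w)
        z = z - lr * grad_loss(w, x)  # stochastic gradient step in dual space
        w = mirror_map_inv(z)         # back to primal: w = (grad psi)^{-1}(z)
    return w

# psi(w) = ||w||^2 / 2: the mirror map is the identity, so SMD reduces to SGD.
identity = lambda w: w

# psi(w) = sum_i w_i log w_i (negative entropy): grad psi(w) = 1 + log(w),
# with inverse exp(z - 1); this yields exponentiated-gradient updates.
neg_entropy = lambda w: 1.0 + np.log(w)
neg_entropy_inv = lambda z: np.exp(z - 1.0)
```

With the quadratic potential the mirror map is the identity and the loop above is ordinary SGD; the negative-entropy maps give exponentiated-gradient updates. These are two standard special cases of SMD, illustrating how the choice of ψ changes the implicit bias without changing the gradient oracle.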

