On the Theoretical Properties of Noise Correlation in Stochastic Optimization

09/19/2022
by   Aurelien Lucchi, et al.

Studying the properties of stochastic noise to optimize complex non-convex functions has been an active area of research in machine learning. Prior work has shown that the noise of stochastic gradient descent improves optimization by helping the iterates overcome undesirable obstacles in the landscape. Moreover, injecting artificial Gaussian noise has become a popular way to quickly escape saddle points. Indeed, in the absence of reliable gradient information, noise is used to explore the landscape, but it is unclear what type of noise is optimal in terms of exploration ability. To narrow this gap in our knowledge, we study a general type of continuous-time non-Markovian process, based on fractional Brownian motion, that allows the increments of the process to be correlated. This generalizes processes based on Brownian motion, such as the Ornstein-Uhlenbeck process. We demonstrate how to discretize such processes, which gives rise to the new algorithm fPGD. This method generalizes the known algorithms PGD and Anti-PGD. We study the properties of fPGD both theoretically and empirically, demonstrating that it possesses exploration abilities that, in some cases, compare favorably with those of PGD and Anti-PGD. These results open the field to novel ways of exploiting noise for training machine learning models.
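The abstract does not spell out the discretization, but a minimal sketch of the idea might look as follows, assuming fPGD injects increments of a fractional Brownian motion with Hurst parameter H into each gradient step. The function names (`fgn_increments`, `fpgd`) and the Cholesky-based noise sampler are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fgn_increments(n_steps, dim, hurst, rng):
    """Sample fractional Gaussian noise: increments of a fractional Brownian
    motion with Hurst index `hurst`, independently for each coordinate.
    Uses the exact covariance of unit-step increments and a Cholesky factor
    (fine for small n_steps; an illustrative choice, not an efficient one)."""
    k = np.arange(n_steps)
    lags = np.abs(k[:, None] - k[None, :])
    cov = 0.5 * (np.abs(lags + 1) ** (2 * hurst)
                 + np.abs(lags - 1) ** (2 * hurst)
                 - 2 * lags ** (2 * hurst))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    return chol @ rng.standard_normal((n_steps, dim))

def fpgd(grad, x0, lr=0.05, sigma=0.1, hurst=0.5, n_steps=500, seed=0):
    """Perturbed gradient descent driven by fractional Gaussian noise
    (a sketch of the fPGD idea, not the paper's exact update rule).
    hurst = 0.5 gives i.i.d. Gaussian perturbations (PGD-like);
    hurst < 0.5 gives anticorrelated perturbations (Anti-PGD-like);
    hurst > 0.5 gives positively correlated perturbations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    noise = fgn_increments(n_steps, x.size, hurst, rng)
    for k in range(n_steps):
        x -= lr * grad(x)       # gradient step
        x += sigma * noise[k]   # correlated perturbation
    return x

# Usage on a toy non-convex objective f(x) = sum(x^4 - x^2).
grad_f = lambda x: 4 * x**3 - 2 * x
x_final = fpgd(grad_f, x0=np.zeros(10), hurst=0.3)
```

In this sketch, setting the Hurst parameter to 0.5 recovers independent Gaussian perturbations in the spirit of PGD, while values below 0.5 produce anticorrelated increments in the spirit of Anti-PGD, which is how the abstract's claim that fPGD generalizes both methods can be read.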

