On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning

02/02/2019
by Jian Li, et al.

Generalization error (also known as the out-of-sample error) measures how well the hypothesis obtained from the training data generalizes to previously unseen data. Obtaining tight generalization error bounds is central to statistical learning theory. In this paper, we study generalization error bounds for learning general non-convex objectives, a problem that has attracted significant attention in recent years. In particular, we study the (algorithm-dependent) generalization bounds of various iterative gradient-based methods. (1) We present a very simple and elementary proof of a recent result for stochastic gradient Langevin dynamics (SGLD) due to Mou et al. (2018). Our proof can easily be extended to obtain similar generalization bounds for several other variants of SGLD (e.g., with postprocessing, momentum, mini-batches, acceleration, and more general noise), and it improves upon the recent results of Pensia et al. (2018). (2) By incorporating ideas from PAC-Bayesian theory into the stability framework, we obtain tighter distribution-dependent (or data-dependent) generalization bounds. Our bounds provide an intuitive explanation for the phenomenon reported in Zhang et al. (2017a). (3) We also study the setting where the total loss is the sum of a bounded loss and an additional ℓ2 regularization term. For continuous Langevin dynamics in this setting, we obtain new generalization bounds by leveraging the Log-Sobolev inequality. Our new bounds are more desirable when the noise level of the process is not small, and they do not grow as T approaches infinity.
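For concreteness, here is a minimal NumPy sketch (ours, not the paper's code) of the SGLD update that these bounds concern: each iterate takes a stochastic gradient step on the empirical loss and injects isotropic Gaussian noise scaled by the step size eta and inverse temperature beta. The toy objective, grad_fn, eta, and beta below are illustrative assumptions.

import numpy as np

def sgld_step(w, grad_fn, eta, beta, rng):
    # One SGLD update: w <- w - eta * grad_fn(w) + sqrt(2 * eta / beta) * N(0, I).
    noise = rng.standard_normal(w.shape)
    return w - eta * grad_fn(w) + np.sqrt(2.0 * eta / beta) * noise

# Toy non-convex empirical loss (illustrative): mean_i (sin(w * x_i) - y_i)^2.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(100), rng.standard_normal(100)
grad_fn = lambda w: np.mean(2.0 * (np.sin(w * x) - y) * np.cos(w * x) * x, keepdims=True)

w = np.zeros(1)
for t in range(2000):
    w = sgld_step(w, grad_fn, eta=0.01, beta=1000.0, rng=rng)
# The generalization error of the returned w is the gap between its
# population risk and its empirical risk on the n training points.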


Related research

01/09/2022 · Stability Based Generalization Bounds for Exponential Family Langevin Dynamics
We study generalization bounds for noisy stochastic mini-batch iterative...

07/19/2017 · Generalization Bounds of SGLD for Non-convex Learning: Two Theoretical Viewpoints
Algorithm-dependent generalization error bounds are central to statistic...

10/21/2020 · On Random Subset Generalization Error Bounds and the Stochastic Gradient Langevin Dynamics Algorithm
In this work, we unify several expected generalization error bounds base...

01/12/2018 · Generalization Error Bounds for Noisy, Iterative Algorithms
In statistical learning theory, generalization error is used to quantify...

05/27/2022 · Generalization Bounds for Gradient Methods via Discrete and Continuous Prior
Proving algorithm-dependent generalization error bounds for gradient-typ...

06/09/2021 · From inexact optimization to learning via gradient concentration
Optimization was recently shown to control the inductive bias in a learn...

02/05/2019 · Distribution-Dependent Analysis of Gibbs-ERM Principle
Gibbs-ERM learning is a natural idealized model of learning with stochas...
