On the Implicit Bias of Dropout

06/26/2018
by Poorya Mianjy, et al.

Algorithmic approaches endow deep learning systems with an implicit bias that helps them generalize even in over-parametrized settings. In this paper, we focus on understanding the bias induced by learning with dropout, a popular technique for avoiding overfitting in deep learning. For single hidden-layer linear neural networks, we show that dropout tends to equalize the norms of the incoming/outgoing weight vectors across all hidden nodes. In addition, we provide a complete characterization of the optimization landscape induced by dropout.
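To make the claimed bias concrete, here is a minimal numpy sketch (an illustration, not the authors' implementation): it trains a single hidden-layer linear network with Bernoulli dropout on the hidden units via SGD on the squared loss, then prints the per-unit products of incoming/outgoing weight norms, which the paper shows tend to equalize at dropout's optima. All dimensions, the retain probability, the step size, and the synthetic data below are assumptions chosen for illustration.

```python
# Minimal sketch (not the paper's code): train y = U V x with dropout
# on the hidden layer, then inspect per-unit norm products ||u_i||*||v_i||.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, n, batch = 10, 5, 3, 512, 32
p = 0.5                                        # retain probability (assumed)

X = rng.standard_normal((n, d_in))
M = rng.standard_normal((d_out, d_in))         # ground-truth linear map
Y = X @ M.T

V = 0.1 * rng.standard_normal((d_hid, d_in))   # input -> hidden weights
U = 0.1 * rng.standard_normal((d_out, d_hid))  # hidden -> output weights

lr = 0.01
for step in range(20000):
    idx = rng.integers(0, n, size=batch)
    x, y = X[idx], Y[idx]
    # Bernoulli mask on hidden units, rescaled by 1/p so the forward
    # pass is unbiased in expectation.
    b = (rng.random((batch, d_hid)) < p) / p
    h = (x @ V.T) * b                          # masked hidden activations
    err = h @ U.T - y                          # residuals, shape (batch, d_out)
    gU = err.T @ h / batch                     # grad of mean squared loss wrt U
    gV = ((err @ U) * b).T @ x / batch         # grad wrt V through the mask
    U -= lr * gU
    V -= lr * gV

# The paper's result, informally: dropout pushes solutions toward equal
# products ||u_i|| * ||v_i|| across hidden units i.
prods = np.linalg.norm(U, axis=0) * np.linalg.norm(V, axis=1)
print("per-unit norm products:", np.round(prods, 3))
```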

Related research

07/13/2022 · Implicit regularization of dropout
It is important to understand how the popular regularization method drop...

10/30/2019 · On the Regularization Properties of Structured Dropout
Dropout and its extensions (e.g., DropBlock and DropConnect) are popular h...

06/20/2017 · Analysis of dropout learning regarded as ensemble learning
Deep learning is the state-of-the-art in fields such as visual object re...

04/27/2022 · Dropout Inference with Non-Uniform Weight Scaling
Dropout as regularization has been used extensively to prevent overfitti...

06/06/2015 · Dropout as a Bayesian Approximation: Appendix
We show that a neural network with arbitrary depth and non-linearities, ...

07/11/2014 · Altitude Training: Strong Bounds for Single-Layer Dropout
Dropout training, originally designed for deep neural networks, has been...

11/09/2017 · Analysis of Dropout in Online Learning
Deep learning is the state-of-the-art in fields such as visual object re...
