On Dropout and Nuclear Norm Regularization

05/28/2019
by Poorya Mianjy, et al.

We give a formal and complete characterization of the explicit regularizer induced by dropout in deep linear networks with squared loss. We show that (a) the explicit regularizer is composed of an ℓ_2-path regularizer and other terms that are also re-scaling invariant, (b) the convex envelope of the induced regularizer is the squared nuclear norm of the network map, and (c) for a sufficiently large dropout rate, we characterize the global optima of the dropout objective. We validate our theoretical findings with empirical results.
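
To make result (a) concrete in the simplest setting: for a single-hidden-layer linear network x ↦ UVx trained with squared loss, with dropout applied to the hidden layer at retain probability p, averaging over the Bernoulli dropout mask yields the plain squared loss plus the explicit penalty ((1-p)/p) Σ_i ||u_i||² (v_iᵀx)², where u_i is the i-th column of U and v_i the i-th row of V. The NumPy sketch below is not from the paper; the dimensions, dropout rate, and Monte Carlo setup are illustrative assumptions used only to check this identity numerically for a single input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and retain probability p (assumptions, not from the paper).
d_in, d_hidden, d_out, p = 5, 8, 3, 0.5

U = rng.standard_normal((d_out, d_hidden))   # second-layer weights
V = rng.standard_normal((d_hidden, d_in))    # first-layer weights
x = rng.standard_normal(d_in)                # a single input
y = rng.standard_normal(d_out)               # its target

z = V @ x                                    # hidden activations

# Monte Carlo estimate of the dropout objective E_b ||y - U diag(b/p) V x||^2,
# where each hidden unit is kept independently with probability p.
num_samples = 200_000
masks = rng.binomial(1, p, size=(num_samples, d_hidden)) / p
preds = (masks * z) @ U.T                    # one prediction per sampled mask
dropout_objective = np.mean(np.sum((y - preds) ** 2, axis=1))

# Closed-form expectation: squared loss plus the explicit penalty
# lambda * sum_i ||u_i||^2 (v_i^T x)^2 with lambda = (1 - p) / p.
lam = (1 - p) / p
penalty = lam * np.sum(np.sum(U ** 2, axis=0) * z ** 2)
explicit = np.sum((y - U @ z) ** 2) + penalty

print(f"Monte Carlo dropout objective:   {dropout_objective:.4f}")
print(f"squared loss + explicit penalty: {explicit:.4f}")
```

The two printed values agree up to Monte Carlo error. For isotropic inputs the penalty averages to ((1-p)/p) Σ_i ||u_i||² ||v_i||², which is the two-layer instance of the ℓ_2-path regularizer referenced in the abstract.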

Related research

Dropout as a Low-Rank Regularizer for Matrix Factorization (10/13/2017)
Regularization for matrix factorization (MF) and approximation problems ...

Dropout: Explicit Forms and Capacity Control (03/06/2020)
We investigate the capacity control provided by dropout in various machi...

Dropout Training as Adaptive Regularization (07/04/2013)
Dropout and other feature noising schemes control overfitting by artific...

Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm (02/24/2021)
We study the function space characterization of the inductive bias resul...

Domain Generalization via Nuclear Norm Regularization (03/13/2023)
The ability to generalize to unseen domains is crucial for machine learn...

An Analysis of Dropout for Matrix Factorization (10/10/2017)
Dropout is a simple yet effective algorithm for regularizing neural netw...

Asymptotic convergence rate of Dropout on shallow linear neural networks (12/01/2020)
We analyze the convergence rate of gradient flows on objective functions...
