Batch Normalization Explained

09/29/2022
by Randall Balestriero, et al.

A critically important, ubiquitous, and yet poorly understood ingredient in modern deep networks (DNs) is batch normalization (BN), which centers and normalizes the feature maps. To date, only limited progress has been made in understanding why BN boosts DN learning and inference performance; prior work has focused exclusively on showing that BN smooths a DN's loss landscape. In this paper, we study BN theoretically from the perspective of function approximation; we exploit the fact that most of today's state-of-the-art DNs are continuous piecewise affine (CPA) splines that fit a predictor to the training data via affine mappings defined over a partition of the input space (the so-called "linear regions"). We demonstrate that BN is an unsupervised learning technique that, independent of the DN's weights or gradient-based learning, adapts the geometry of a DN's spline partition to match the data. BN therefore provides a "smart initialization" that boosts the performance of DN learning, because it aligns the spline partition of even a randomly initialized DN with the data. We also show that the variation of BN statistics between mini-batches introduces a dropout-like random perturbation to the partition boundaries, and hence to the decision boundary in classification problems. This per-mini-batch perturbation reduces overfitting and improves generalization by increasing the margin between the training samples and the decision boundary.
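To make the CPA view concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper; the tiny architecture, seed, and tolerance are arbitrary). It reads off the affine map that a small random ReLU network implements on one linear region from the layer's ReLU on/off pattern, and checks that the network agrees with that affine map for nearby inputs in the same region.

```python
# A minimal sketch (illustrative, not from the paper) of the CPA spline view:
# within one "linear region" a ReLU network is exactly an affine map A x + c,
# and the region is identified by the layer's ReLU on/off pattern.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer, 8 units
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # scalar output

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x0 = rng.normal(size=2)
mask = (W1 @ x0 + b1 > 0).astype(float)  # on/off pattern = code of x0's region

# Affine map active on x0's region: A = W2 diag(mask) W1, c = W2 (mask*b1) + b2.
A = W2 @ (mask[:, None] * W1)
c = W2 @ (mask * b1) + b2

# A small perturbation that preserves the activation pattern stays on the same
# affine piece, so the closed form matches the network up to float error.
x = x0 + 1e-3 * rng.normal(size=2)
assert np.array_equal((W1 @ x + b1 > 0).astype(float), mask)
print(net(x), A @ x + c)  # prints two (numerically) identical values
```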
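The paper's two main claims about BN can likewise be illustrated on a toy setup (again a sketch of our own; the data, the single unit, and the mini-batch sizes are hypothetical): batch-normalizing the pre-activations of a randomly initialized unit moves its partition boundary, i.e. the hyperplane where the pre-activation vanishes, onto the data without any gradient step; and re-estimating the BN statistics on each mini-batch jitters that boundary from batch to batch, the dropout-like perturbation described above.

```python
# Minimal NumPy sketch (not the authors' code) of two claims from the abstract,
# for a single randomly initialized unit z = w.x + b followed by BN:
#  1. BN re-centers the unit's "fold" (the hyperplane z = 0, a spline partition
#     boundary) onto the data, independent of any gradient-based learning.
#  2. Per-mini-batch BN statistics jitter the fold's position, a dropout-like
#     random perturbation of the partition boundary.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1000 points in R^2, deliberately centered far from the origin.
X = rng.normal(loc=5.0, scale=1.0, size=(1000, 2))

# One randomly initialized affine unit (one "neuron" before the nonlinearity).
w = rng.normal(size=2)
b = rng.normal()
z = X @ w + b  # pre-activations

# Distance from each point to the hyperplane {x : w.x + b = 0}.
dist_before = np.abs(z) / np.linalg.norm(w)
print(f"mean |distance to fold| before BN: {dist_before.mean():.2f}")

# Batch-normalize the pre-activations (full-data statistics, inference-style).
# The effective unit becomes w' = w / sqrt(var+eps), b' = (b - mean)/sqrt(var+eps),
# so its fold passes through the data: the normalized z has mean zero.
eps = 1e-5
w_bn = w / np.sqrt(z.var() + eps)
b_bn = (b - z.mean()) / np.sqrt(z.var() + eps)
dist_after = np.abs(X @ w_bn + b_bn) / np.linalg.norm(w_bn)
print(f"mean |distance to fold| after  BN: {dist_after.mean():.2f}")

# Claim 2: per-mini-batch statistics shift the fold from batch to batch. With
# batch mean m, the fold sits where z = m, i.e. offset m/||w|| along w.
offsets = []
for batch in np.split(rng.permutation(X), 10):  # ten mini-batches of 100
    zb = batch @ w + b
    offsets.append(zb.mean() / np.linalg.norm(w))
print(f"fold offset across mini-batches: std = {np.std(offsets):.3f}")
```

The same mechanism applies per unit and per layer in a deep network; the sketch isolates one unit only to keep the geometry of the partition boundary explicit.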

Related Research

Batch Normalization Preconditioning for Neural Network Training (08/02/2021)
Batch normalization (BN) is a popular and ubiquitous method in deep lear...

The Geometry of Deep Networks: Power Diagram Subdivision (05/21/2019)
We study the geometry of deep (neural) networks (DNs) with piecewise aff...

Cross-Iteration Batch Normalization (02/13/2020)
A well-known issue of Batch Normalization is its significantly reduced e...

Max-Affine Spline Insights Into Deep Network Pruning (01/07/2021)
In this paper, we study the importance of pruning in Deep Networks (DNs)...

Logit Attenuating Weight Normalization (08/12/2021)
Over-parameterized deep networks trained using gradient-based optimizers...

Some like it tough: Improving model generalization via progressively increasing the training difficulty (10/25/2021)
In this work, we propose to progressively increase the training difficul...

A Spline Theory of Deep Networks (Extended Version) (05/17/2018)
We build a rigorous bridge between deep networks (DNs) and approximation...
