Regularizing by the Variance of the Activations' Sample-Variances

11/21/2018
by Etai Littwin, et al.

Normalization techniques play an important role in supporting efficient and often more effective training of deep neural networks. While conventional methods explicitly normalize the activations, we propose adding a loss term instead. This new loss term encourages the variance of the activations to be stable, i.e., not to vary from one random mini-batch to the next. As we prove, this encourages the activations to be distributed around a few distinct modes. We also show that if the inputs come from a mixture of two Gaussians, the new loss either merges the two components or separates them optimally in the LDA sense, depending on the prior probabilities. Finally, we link the new regularization term to the batchnorm method, which provides the latter with a regularization perspective. Our experiments demonstrate an improvement in accuracy over the batchnorm technique for both CNNs and fully connected networks.
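
The idea in the abstract can be sketched in code: estimate the sample variance of each activation on several random sub-batches and penalize how much those estimates fluctuate. The sketch below is a minimal PyTorch illustration, not the paper's exact formulation; the function name, the num_subsets parameter, and the normalization by the squared mean variance are our assumptions.

```python
import torch


def variance_of_sample_variances_loss(activations, num_subsets=4, eps=1e-8):
    """Illustrative regularizer: penalize fluctuations of the per-unit
    sample variance of the activations across random sub-batches.

    activations: tensor of shape (batch_size, num_features).
    num_subsets: number of random sub-batches used to estimate the sample
                 variances (an assumed hyperparameter, not from the paper).
    """
    batch_size = activations.size(0)
    perm = torch.randperm(batch_size, device=activations.device)
    chunks = torch.chunk(activations[perm], num_subsets, dim=0)

    # Sample variance of every unit, estimated separately on each sub-batch.
    sub_vars = torch.stack([c.var(dim=0, unbiased=True) for c in chunks])

    # Variance of those sample variances across sub-batches, normalized by
    # the squared mean variance so the penalty does not depend on the
    # overall scale of the activations.
    mean_var = sub_vars.mean(dim=0)
    penalty = sub_vars.var(dim=0, unbiased=False) / (mean_var ** 2 + eps)
    return penalty.mean()
```

In training, such a term would typically be added to the task loss with a small weight, e.g. loss = task_loss + reg_weight * variance_of_sample_variances_loss(hidden), where reg_weight is a hypothetical hyperparameter to be tuned.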


