Population Gradients improve performance across data-sets and architectures in object classification

by Yurika Sakai, et al.

The most successful methods, such as ReLU transfer functions, batch normalization, Xavier initialization, dropout, learning rate decay, and dynamic optimizers, have become standards in the field largely because of their ability to increase the performance of Neural Networks (NNs) significantly and in almost all situations. Here we present a new method to calculate the gradients while training NNs, and show that it significantly improves final performance across architectures, data-sets, hyper-parameter values, training lengths, and model sizes, including when it is combined with other common performance-improving methods (such as the ones mentioned above). Besides being effective in the wide array of situations that we have tested, the increase in performance (e.g. F1) it provides is as high as or higher than that of all the other widespread performance-improving methods that we have compared against. We call our method Population Gradients (PG), and it consists in using a population of NNs to calculate a non-local estimate of the gradient, which is closer to the theoretical exact gradient (i.e. the one obtainable only with an infinitely large data-set) of the error function than the empirical gradient (i.e. the one obtained with the real, finite data-set).
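The abstract does not spell out how the population is used to form the non-local estimate. A minimal sketch, assuming the simplest plausible reading (averaging the empirical gradient over a population of weight configurations perturbed around the current one), would look like this; the names, perturbation scheme, and hyper-parameters here are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def population_gradient(w, X, y, pop_size=8, sigma=0.1, rng=None):
    # Non-local gradient estimate: average the empirical gradient over a
    # population of models whose weights are perturbed around w.
    # (Illustrative sketch only; the paper's estimator may differ.)
    rng = np.random.default_rng() if rng is None else rng
    grads = [loss_grad(w + sigma * rng.standard_normal(w.shape), X, y)
             for _ in range(pop_size)]
    return np.mean(grads, axis=0)

# Usage: a few gradient-descent steps on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w -= 0.1 * population_gradient(w, X, y, rng=rng)
```

Averaging over a perturbed population smooths the loss landscape locally, which is one way a "non-local" estimate can end up closer to the infinite-data gradient than the single-point empirical one.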


