AutoInit: Analytic Signal-Preserving Weight Initialization for Neural Networks

by Garrett Bingham et al.

Neural networks require careful weight initialization to prevent signals from exploding or vanishing. Existing initialization schemes solve this problem in specific cases by assuming that the network has a certain activation function or topology. It is difficult to derive such weight initialization strategies, and modern architectures therefore often use these same initialization schemes even though their assumptions do not hold. This paper introduces AutoInit, a weight initialization algorithm that automatically adapts to different neural network architectures. By analytically tracking the mean and variance of signals as they propagate through the network, AutoInit is able to appropriately scale the weights at each layer to avoid exploding or vanishing signals. Experiments demonstrate that AutoInit improves performance of various convolutional and residual networks across a range of activation function, dropout, weight decay, learning rate, and normalizer settings. Further, in neural architecture search and activation function meta-learning, AutoInit automatically calculates specialized weight initialization strategies for thousands of unique architectures and hundreds of unique activation functions, and improves performance in vision, language, tabular, multi-task, and transfer learning scenarios. AutoInit thus serves as an automatic configuration tool that makes design of new neural network architectures more robust. The AutoInit package provides a wrapper around existing TensorFlow models and is available at
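The core idea described above — tracking the variance of signals as they pass through a layer and scaling the weights so that variance neither explodes nor vanishes — can be sketched as follows. This is a minimal illustration, not the actual AutoInit package: AutoInit derives the scale analytically from the activation function's mean/variance mapping, whereas this sketch estimates it empirically by sampling (closer in spirit to LSUV-style data-dependent initialization). All function and parameter names here are hypothetical.

```python
import numpy as np

def variance_preserving_init(fan_in, fan_out, activation,
                             n_samples=10000, tol=0.01, max_iters=20, seed=0):
    """Scale a dense layer's weights so that a unit-variance input signal
    keeps (roughly) unit variance after the layer and its activation.
    The scale is found by iteratively dividing the weights by the
    observed output standard deviation until it is close to 1."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((fan_in, fan_out))
    x = rng.standard_normal((n_samples, fan_in))  # unit-variance input signal
    for _ in range(max_iters):
        s = activation(x @ W).std()
        if abs(s - 1.0) < tol:
            break
        W /= s  # shrink (or grow) the weights until output variance ~ 1
    return W

relu = lambda z: np.maximum(z, 0.0)
W = variance_preserving_init(256, 256, relu)

# On fresh unit-variance input, the post-activation signal stays near std 1,
# so stacking many such layers would not explode or vanish the signal:
x = np.random.default_rng(1).standard_normal((10000, 256))
print(relu(x @ W).std())  # close to 1.0
```

For positively homogeneous activations such as ReLU a single rescaling step suffices; the loop handles general activations, where scaling the weights changes the output variance nonlinearly.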


