Embedded Ensembles: Infinite Width Limit and Operating Regimes

02/24/2022
by Maksim Velikanov, et al.

A memory-efficient approach to ensembling neural networks is to share most weights among the ensembled models by means of a single reference network. We refer to this strategy as Embedded Ensembling (EE); particular examples are BatchEnsembles and Monte Carlo dropout ensembles. In this paper we perform a systematic theoretical and empirical analysis of embedded ensembles with different numbers of models. Theoretically, we use a Neural-Tangent-Kernel-based approach to derive the wide-network limit of the gradient descent dynamics. In this limit, we identify two ensemble regimes, independent and collective, depending on the architecture and initialization strategy of the ensemble models. We prove that in the independent regime the embedded ensemble behaves as an ensemble of independent models. We confirm our theoretical predictions with a wide range of experiments on finite networks, and further study empirically various effects such as the transition between the two regimes, the scaling of ensemble performance with network width and number of models, and the dependence of performance on a number of architecture and hyperparameter choices.
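To make the weight-sharing idea concrete, below is a minimal NumPy sketch of one common instance of embedded ensembling: a BatchEnsemble-style dense layer in which a single shared reference weight matrix is modulated by per-member rank-1 vectors. The class name, parameter names (r, s, b), and the initialization comments are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

class EmbeddedEnsembleLayer:
    """Dense layer shared across K ensemble members (BatchEnsemble-style sketch).

    Every member k reuses the single reference weight matrix W and owns only
    rank-1 modulation vectors r_k (input scaling) and s_k (output scaling),
    plus its own bias b_k, so the memory overhead per member stays small.
    """

    def __init__(self, in_dim, out_dim, num_members, rng=None):
        rng = rng or np.random.default_rng(0)
        # Shared reference weights: one matrix for all ensemble members.
        self.W = rng.normal(0.0, 1.0 / np.sqrt(in_dim), size=(in_dim, out_dim))
        # Per-member rank-1 factors and biases (the only member-specific parameters).
        # Initializing r and s to all-ones makes the members start identical; random
        # sign initialization would decorrelate them at initialization. How such
        # choices map onto the independent/collective regimes is the paper's subject;
        # the all-ones choice here is just a simple default for the sketch.
        self.r = np.ones((num_members, in_dim))
        self.s = np.ones((num_members, out_dim))
        self.b = np.zeros((num_members, out_dim))

    def forward(self, x, k):
        """Forward pass of member k on a batch x of shape (batch, in_dim)."""
        return (x * self.r[k]) @ self.W * self.s[k] + self.b[k]

    def ensemble_predict(self, x):
        """Average the member outputs, as in a standard ensemble prediction."""
        outs = [self.forward(x, k) for k in range(self.r.shape[0])]
        return np.mean(outs, axis=0)


# Usage: a 3-member embedded ensemble of a single 8 -> 4 layer.
layer = EmbeddedEnsembleLayer(in_dim=8, out_dim=4, num_members=3)
x = np.random.default_rng(1).normal(size=(5, 8))
print(layer.ensemble_predict(x).shape)  # (5, 4)
```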

Related research

07/11/2020 · Bayesian Deep Ensembles via the Neural Tangent Kernel
We explore the link between deep ensembles and Gaussian processes (GPs) ...

06/13/2020 · Collegial Ensembles
Modern neural network performance typically improves as model size incre...

10/18/2022 · Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel
Identifying unfamiliar inputs, also known as out-of-distribution (OOD) d...

06/20/2020 · Collective Learning by Ensembles of Altruistic Diversifying Neural Networks
Combining the predictions of collections of neural networks often outper...

10/29/2021 · Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit
To theoretically understand the behavior of trained deep neural networks...

02/20/2020 · Kernel and Rich Regimes in Overparametrized Models
A recent line of work studies overparametrized neural networks in the "k...

12/21/2013 · An empirical analysis of dropout in piecewise linear networks
The recently introduced dropout training criterion for neural networks h...
