Decoupling Gating from Linearity

06/12/2019
by Jonathan Fiat, et al.

ReLU neural networks have been the focus of many recent theoretical works attempting to explain their empirical success. Nonetheless, there is still a gap between current theoretical results and empirical observations, even in the case of shallow (one-hidden-layer) networks. For example, in the task of memorizing a random sample of size m and dimension d, the best theoretical result requires the size of the network to be Ω̃(m^2/d), while empirically a network of size only slightly larger than m/d is sufficient. To bridge this gap, we study a simplified model for ReLU networks. We observe that a ReLU neuron is the product of a linear function and a gate (the latter determines whether the neuron is active), where both share a jointly trained weight vector. In this spirit, we introduce the Gated Linear Unit (GaLU), which decouples the linearity from the gating by assigning a different vector to each role. We show that GaLU networks admit optimization and generalization results that are much stronger than those available for ReLU networks. Specifically, we show a memorization result for networks of size Ω̃(m/d), and improved generalization bounds. Finally, we show that in some scenarios GaLU networks behave similarly to ReLU networks, making them a good choice of simplified model.
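To make the decoupling concrete, the following is a minimal NumPy sketch of a single ReLU neuron versus a single GaLU neuron; the function names relu_neuron and galu_neuron are illustrative, not taken from the paper. A ReLU neuron uses one weight vector w both to decide whether it fires and to compute its output, whereas a GaLU neuron uses a separate gating vector u and recovers ReLU when u = w. One intuition for the model's tractability: if u is held fixed, the neuron's output is linear in the trainable weights w.

import numpy as np

def relu_neuron(x, w):
    # ReLU neuron: the same vector w both gates the neuron
    # (active iff w.x > 0) and produces the linear output w.x.
    z = w @ x
    return z * (z > 0)

def galu_neuron(x, w, u):
    # GaLU neuron: the gate is driven by a separate vector u,
    # while the output is the linear function w.x.
    # Choosing u = w recovers the ReLU neuron.
    return (w @ x) * (u @ x > 0)

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
w = rng.standard_normal(d)
u = rng.standard_normal(d)
print(relu_neuron(x, w))      # gate and output share w
print(galu_neuron(x, w, u))   # gate uses u, output uses w
print(galu_neuron(x, w, w))   # equals the ReLU output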

