Memorization Capacity of Neural Networks with Conditional Computation

03/20/2023
by Erdem Koyuncu, et al.

Many empirical studies have demonstrated the performance benefits of conditional computation in neural networks, including reduced inference time and power consumption. We study the fundamental limits of neural conditional computation from the perspective of memorization capacity. For Rectified Linear Unit (ReLU) networks without conditional computation, it is known that memorizing a collection of n input-output relationships can be accomplished via a neural network with O(√n) neurons. Calculating the output of such a network for a given input requires O(√n) elementary arithmetic operations (additions, multiplications, and comparisons). Using a conditional ReLU network, we show that the same task can be accomplished using only O(log n) operations per input, an almost exponential improvement over networks without conditional computation. We also show that the Θ(log n) rate is the best possible. Our achievability result relies on a general methodology for synthesizing a conditional network out of an unconditional network in a computationally efficient manner, bridging the gap between unconditional and conditional architectures.
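To give a rough feel for the operations-per-input gap, the following Python sketch is a toy illustration (not the construction from the paper): it memorizes n scalar input-output pairs and recovers each label with O(log n) comparisons by routing queries down a binary decision tree, in the spirit of conditional computation. All names and structure here are assumptions made purely for illustration.

```python
import numpy as np

# Toy illustration (not the paper's construction): memorize n scalar
# input-output pairs, then answer each query by executing only the
# O(log n) comparisons on a root-to-leaf path of a binary tree,
# instead of evaluating every unit of an unconditional network.

rng = np.random.default_rng(0)
n = 1024
xs = np.sort(rng.uniform(0.0, 1.0, size=n))   # distinct scalar inputs
ys = rng.uniform(-1.0, 1.0, size=n)           # arbitrary target outputs

def predict(x: float) -> float:
    """Recover the memorized label via binary search; only one
    comparison per tree level is executed for a given input."""
    lo, hi = 0, n - 1
    ops = 0
    while lo < hi:
        mid = (lo + hi) // 2
        ops += 1                      # one comparison per level
        if x > xs[mid]:
            lo = mid + 1
        else:
            hi = mid
    assert ops <= int(np.ceil(np.log2(n)))   # at most log2(n) = 10 ops
    return ys[lo]

# Every stored pair is reproduced exactly, with at most 10
# comparisons per query rather than work scaling with n.
assert all(predict(x) == y for x, y in zip(xs, ys))
```

The point of the sketch is only the conditioning pattern: which computations run depends on the input, so the per-input cost scales with the depth of the routing tree rather than the total number of stored relationships.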
