Improving Performance in Neural Networks by Dendrites-Activated Connections

01/03/2023
by Carlo Metta, et al.

Computational units in artificial neural networks compute a linear combination of their inputs and then apply a nonlinear filter, often a ReLU shifted by some bias. If the inputs themselves come from other units, they have already been filtered with those units' own biases. Within a layer, multiple units share the same inputs, but each input has been filtered with a single bias, so the output values depend on shared input biases rather than on biases tuned individually for each unit. To mitigate this issue, we introduce DAC, a new computational unit based on preactivation and multiple biases, where input signals undergo independent nonlinear filtering before the linear combination. We provide a Keras implementation and report its computational efficiency. We test DAC convolutions in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, achieving performance improvements of up to 1.73%. We describe examples where DAC is more efficient than its standard counterpart as a function approximator, and we prove a universal representation theorem.
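
The abstract mentions a Keras implementation; the sketch below is only an illustration of the dense case under our own assumptions. The class name `DACDense`, the weight shapes, and the choice of ReLU as the filter are hypothetical, not taken from the authors' code. The key difference from a standard layer is that each connection (i, j) carries its own bias, so every input preactivation is filtered independently before entering the linear combination.

```python
import tensorflow as tf
from tensorflow import keras


class DACDense(keras.layers.Layer):
    """Minimal sketch of a dense DAC-style unit (hypothetical layout).

    In a standard layer, input i is filtered once with a shared bias b_i
    and the same filtered value is reused by every downstream unit. Here
    each connection (i, j) has its own bias b_ij, so the preactivation of
    input i is filtered separately for every output unit j.
    """

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        n_in = int(input_shape[-1])
        # one weight and one bias per connection (i, j)
        self.w = self.add_weight(name="w", shape=(n_in, self.units),
                                 initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(n_in, self.units),
                                 initializer="zeros")

    def call(self, x):
        # x holds the previous layer's preactivations: (batch, n_in)
        # filter each input with its connection-specific bias ...
        filtered = tf.nn.relu(x[:, :, None] + self.b)  # (batch, n_in, units)
        # ... then linearly combine the filtered signals per output unit
        return tf.reduce_sum(self.w * filtered, axis=1)  # (batch, units)
```

For example, `DACDense(16)(tf.random.normal((8, 32)))` returns a tensor of shape `(8, 16)`; the DAC convolutions tested in the paper would presumably follow the same pattern with connection-wise biases per channel.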

Related research:

02/01/2018
Training Neural Networks by Using Power Linear Units (PoLUs)
In this paper, we introduce "Power Linear Unit" (PoLU) which increases t...

12/14/2018
Why ReLU Units Sometimes Die: Analysis of Single-Unit Error Backpropagation in Neural Networks
Recently, neural networks in machine learning use rectified linear units...

12/20/2013
Improving Deep Neural Networks with Probabilistic Maxout Units
We present a probabilistic variant of the recently introduced maxout uni...

12/17/2017
Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study
Deep Neural Networks (DNNs) are very popular these days, and are the sub...

03/12/2018
R3Net: Random Weights, Rectifier Linear Units and Robustness for Artificial Neural Network
We consider a neural network architecture with randomized features, a si...

03/22/2022
Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals
Humans can learn several tasks in succession with minimal mutual interfe...

11/10/2022
Improving the Robustness of Neural Multiplication Units with Reversible Stochasticity
Multilayer Perceptrons struggle to learn certain simple arithmetic tasks...
