Two-argument activation functions learn soft XOR operations like cortical neurons

10/13/2021
by KiJung Yoon, et al.

Neurons in the brain are complex machines with distinct functional compartments that interact nonlinearly. In contrast, neurons in artificial neural networks abstract away this complexity, typically down to a scalar activation function of a weighted sum of inputs. Here we emulate more biologically realistic neurons by learning canonical activation functions with two input arguments, analogous to basal and apical dendrites. We use a network-in-network architecture where each neuron is modeled as a multilayer perceptron with two inputs and a single output. This inner perceptron is shared by all units in the outer network. Remarkably, the resultant nonlinearities often produce soft XOR functions, consistent with recent experimental observations about interactions between inputs in human cortical neurons. When hyperparameters are optimized, networks with these nonlinearities learn faster and perform better than conventional ReLU nonlinearities with matched parameter counts, and they are more robust to natural and adversarial perturbations.
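The following is a minimal sketch, not the authors' code, of the network-in-network idea described above: each unit in a layer computes two weighted sums of its inputs (analogous to basal and apical dendrites), and both sums are passed through a small two-input, one-output MLP that is shared by all units. The class names, layer sizes, and the hidden width of the inner MLP are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SharedTwoArgActivation(nn.Module):
    """Learned canonical activation f(a, b) -> scalar, shared by every unit."""

    def __init__(self, hidden: int = 8):
        super().__init__()
        # Small inner MLP: 2 inputs -> 1 output (hidden width is an assumption)
        self.net = nn.Sequential(
            nn.Linear(2, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Stack the two arguments along the last dim and apply the inner MLP
        # elementwise over units: (..., units, 2) -> (..., units)
        return self.net(torch.stack((a, b), dim=-1)).squeeze(-1)


class TwoArgLayer(nn.Module):
    """Outer layer: each unit receives two weighted sums of the same inputs."""

    def __init__(self, in_features: int, out_features: int,
                 activation: SharedTwoArgActivation):
        super().__init__()
        self.basal = nn.Linear(in_features, out_features)   # "basal" input sum
        self.apical = nn.Linear(in_features, out_features)  # "apical" input sum
        self.activation = activation                         # shared across units

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.activation(self.basal(x), self.apical(x))


if __name__ == "__main__":
    act = SharedTwoArgActivation()
    layer = TwoArgLayer(in_features=16, out_features=32, activation=act)
    y = layer(torch.randn(4, 16))
    print(y.shape)  # torch.Size([4, 32])
```

Because the same SharedTwoArgActivation instance can be passed to every TwoArgLayer in a deeper network, the learned two-argument nonlinearity acts as a single canonical activation for the whole model, which is the setting in which the paper reports soft-XOR-like functions emerging.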

Related research

11/07/2021
Biologically Inspired Oscillating Activation Functions Can Bridge the Performance Gap between Biological and Artificial Neurons
Nonlinear activation functions endow neural networks with the ability to...

07/15/2022
The Mechanical Neural Network(MNN) – A physical implementation of a multilayer perceptron for education and hands-on experimentation
In this paper the Mechanical Neural Network(MNN) is introduced, a physic...

03/13/2018
Conditional Activation for Diverse Neurons in Heterogeneous Networks
In this paper, we propose a new scheme for modelling the diverse behavio...

07/31/2022
Functional Rule Extraction Method for Artificial Neural Networks
The idea I propose in this paper is a method that is based on comprehens...

10/06/2019
Auto-Rotating Perceptrons
This paper proposes an improved design of the perceptron unit to mitigat...

06/29/2022
Automatic Synthesis of Neurons for Recurrent Neural Nets
We present a new class of neurons, ARNs, which give a cross entropy on t...
