ProbAct: A Probabilistic Activation Function for Deep Neural Networks

by Joonho Lee, et al.

Activation functions play an important role in the training of artificial neural networks, and the Rectified Linear Unit (ReLU) has been the mainstream choice in recent years. Most activation functions currently in use are deterministic: their input-output mapping is fixed. In this work, we propose a probabilistic activation function, called ProbAct. The output of ProbAct is sampled from a normal distribution whose mean equals the ReLU output, with a fixed or trainable variance for each element. In the trainable variant, the variance of the activation distribution is learned through back-propagation. We also show that the stochastic perturbation introduced by ProbAct is a viable regularization technique that can prevent overfitting. In our experiments, we demonstrate that ProbAct boosts image classification performance on the CIFAR-10, CIFAR-100, and STL-10 datasets.
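The sampling step described above can be sketched in a few lines. This is a minimal illustration assuming NumPy; the function name `probact` and the scalar `sigma` argument are choices made here for clarity. In the trainable variant described in the abstract, `sigma` would instead be a per-element parameter updated by back-propagation, which this sketch does not implement.

```python
import numpy as np

def probact(x, sigma, rng=None):
    """ProbAct sketch: sample each output from N(ReLU(x), sigma^2).

    x     : input array (pre-activation values)
    sigma : standard deviation of the perturbation (0 recovers plain ReLU)
    rng   : optional NumPy random generator, for reproducibility
    """
    if rng is None:
        rng = np.random.default_rng(0)
    mean = np.maximum(x, 0.0)  # ReLU output serves as the mean
    return mean + sigma * rng.standard_normal(x.shape)

# With sigma = 0 the function reduces exactly to ReLU.
out = probact(np.array([-1.0, 2.0]), sigma=0.0)
```

Note that the perturbation is applied only at sampling time; averaged over many draws, the expected output is simply the ReLU output, which is what makes the stochasticity act as a regularizer rather than a bias.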
