Symmetrical Gaussian Error Linear Units (SGELUs)

11/10/2019
by Chao Yu, et al.

In this paper, a novel neural network activation function, called the Symmetrical Gaussian Error Linear Unit (SGELU), is proposed to achieve high performance. It is obtained by effectively integrating the stochastic-regularizer property of the Gaussian Error Linear Unit (GELU) with a symmetrical characteristic. By combining these two merits, the proposed unit enables bidirectional convergence, allowing the network to be optimized without suffering from the vanishing-gradient problem. SGELU is evaluated against GELU and the Linearly Scaled Hyperbolic Tangent (LiSHT) on MNIST classification and an MNIST auto-encoder, and the results validate its advantages in performance and convergence rate across these applications.
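The abstract does not state the exact form of SGELU, so the sketch below is only illustrative: it assumes SGELU(x) = alpha * x * erf(x / sqrt(2)), a symmetric (even) counterpart of GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2))), with alpha a hypothetical scaling parameter. The point is to show how the symmetric variant responds identically to large positive and large negative inputs, which is what "bidirectional convergence" alludes to.

# Minimal sketch of GELU and an assumed SGELU-style symmetric variant.
import numpy as np
from scipy.special import erf

def gelu(x):
    # Gaussian Error Linear Unit: x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def sgelu(x, alpha=1.0):
    # Assumed symmetric variant: alpha * x * erf(x / sqrt(2)).
    # This is an even function, so sgelu(-x) == sgelu(x).
    return alpha * x * erf(x / np.sqrt(2.0))

if __name__ == "__main__":
    xs = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
    print("GELU :", gelu(xs))   # asymmetric: small response for negative inputs
    print("SGELU:", sgelu(xs))  # symmetric: same response for +/- inputs

This is a sketch under the stated assumption, not the authors' definition; the exact formulation and the choice of alpha are given in the full paper.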


