PLU: The Piecewise Linear Unit Activation Function

09/03/2018
by   Andrei Nicolae, et al.

Successive linear transforms followed by nonlinear "activation" functions can approximate nonlinear functions to arbitrary precision given sufficient layers. The number of layers required depends, in part, on the nature of the activation function. The hyperbolic tangent (tanh) was a favored choice as an activation function until networks grew deeper and vanishing gradients became a hindrance during training. For this reason, the Rectified Linear Unit (ReLU), defined by max(0, x), has become the prevailing activation function in deep neural networks. Unlike the smooth tanh, the ReLU yields networks that are piecewise linear functions with a limited number of facets. This paper presents a new activation function, the Piecewise Linear Unit (PLU), a hybrid of tanh and ReLU that is shown to outperform the ReLU on a variety of tasks while avoiding the vanishing-gradient issue of the tanh.
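
The abstract does not spell out the PLU's formula, but a tanh/ReLU hybrid of this kind is naturally written as a three-piece linear function: identity near the origin (like tanh) and a small nonzero slope outside that range (so the gradient never vanishes). The sketch below assumes the form PLU(x) = max(alpha*(x + c) - c, min(alpha*(x - c) + c, x)) with assumed constants alpha = 0.1 and c = 1; these values and the exact parameterization are illustrative, not taken from this abstract.

    # Minimal NumPy sketch of a piecewise linear unit (PLU), under the
    # assumed form PLU(x) = max(alpha*(x + c) - c, min(alpha*(x - c) + c, x)).
    # Inside [-c, c] the unit is the identity; outside it has slope alpha,
    # so the gradient is never zero (unlike tanh's saturating tails).
    import numpy as np

    def plu(x, alpha=0.1, c=1.0):
        """Piecewise linear unit: identity on [-c, c], slope alpha outside."""
        return np.maximum(alpha * (x + c) - c,
                          np.minimum(alpha * (x - c) + c, x))

    def plu_grad(x, alpha=0.1, c=1.0):
        """Gradient of the PLU: 1 inside [-c, c], alpha outside."""
        return np.where(np.abs(x) <= c, 1.0, alpha)

    if __name__ == "__main__":
        xs = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
        print(plu(xs))       # [-1.4 -1.  0.  1.  1.4]
        print(plu_grad(xs))  # [0.1 1.  1.  1.  0.1]

Because each piece has a strictly positive slope, backpropagated gradients are scaled by at most 1 and at least alpha, which is the property the abstract credits for avoiding the vanishing gradients of tanh while retaining a bounded, tanh-like shape near the origin.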
