
Activation Level

What is an Activation Level in Machine Learning?

The activation level of a node in an artificial neural network is the output produced by its activation function, or, in the case of input nodes, the value supplied directly to the network. An activation function determines the output behavior of each node, or “neuron,” in the network. This output is then used as input for the nodes in the next layer, and so on, until the network produces a solution to the original problem.

This output level is expressed as a real number, usually limited to the range 0 to 1 or –1 to 1. Input nodes receive their value externally, as part of the data submitted to the network, while each hidden or output node receives its inputs from the activation levels of the previous layer.
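As a minimal illustration, the Python sketch below (with made-up weights and inputs) computes one node's activation level: the weighted sum of the incoming values is passed through an activation function, giving a result between 0 and 1 for a sigmoid or between –1 and 1 for tanh.

```python
import math

def sigmoid(x):
    """Squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def activation_level(inputs, weights, bias, activation=sigmoid):
    """The weighted sum of incoming activation levels, passed through
    the activation function, gives this node's own activation level."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(weighted_sum)

# Illustrative values only: three incoming activation levels.
incoming = [0.2, 0.9, 0.4]
weights = [0.5, -0.3, 0.8]
bias = 0.1

print(activation_level(incoming, weights, bias))             # in (0, 1)
print(activation_level(incoming, weights, bias, math.tanh))  # in (-1, 1)
```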

What are ReLUs (Rectified Linear Units)?

The most common activation functions in modern use are rectified linear units (ReLUs). Unlike the older sigmoid and tanh functions, ReLU does not suffer from the vanishing gradient problem.

A simple ReLU is expressed as R(x) = max(0, x); that is, R(x) = 0 if x < 0 and R(x) = x if x >= 0.
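That definition translates directly into code. The sketch below (plain Python, illustrative names) also includes the derivative, since the constant gradient of 1 for positive inputs is what avoids the vanishing gradient problem mentioned above.

```python
def relu(x):
    """R(x) = max(0, x): zero for negative inputs, identity otherwise."""
    return max(0.0, x)

def relu_derivative(x):
    """Gradient is 0 for x < 0 and 1 for x > 0, so positive activations
    pass gradients through unchanged during backpropagation."""
    return 0.0 if x < 0 else 1.0

print(relu(-2.5), relu(3.0))                        # 0.0 3.0
print(relu_derivative(-2.5), relu_derivative(3.0))  # 0.0 1.0
```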
There are also variations for specific situations (sketched in code after the list):
  • Noisy ReLUs – Extend the basic ReLU by adding Gaussian noise to the input.
  • Leaky ReLUs – Allow a small, positive gradient when the unit is not active.
  • Exponential Linear Units (ELUs) – Attempt to push mean activations closer to zero to speed up learning.
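These variants differ only in how they handle negative or noisy inputs. The sketch below shows common textbook formulations; the noise scale and the alpha slopes are illustrative hyperparameters, not prescribed values.

```python
import math
import random

def noisy_relu(x, noise_std=0.1):
    """Noisy ReLU: Gaussian noise is added to the input before rectifying."""
    return max(0.0, x + random.gauss(0.0, noise_std))

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: a small positive slope alpha keeps a gradient
    flowing even when the unit is not active (x < 0)."""
    return x if x > 0 else alpha * x

def elu(x, alpha=1.0):
    """ELU: smooth negative saturation pushes mean activations toward zero."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

for x in (-2.0, 0.5):
    print(noisy_relu(x), leaky_relu(x), elu(x))
```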