What is the Generalized Delta Rule?
The generalized delta rule is a mathematically derived formula that determines how a neural network's connection weights are updated during a backpropagation training step. A neural network learns a function that maps an input to an output, based on given example pairs of inputs and outputs. Before training, the network's connection weights are initialized with small random numbers. During training, a fixed set of input-output pairs is presented repeatedly, in random order, and the generalized delta rule is applied after each presentation to modify the weights on the connections between nodes. The purpose of these weight modifications is to reduce the overall network error, that is, the difference between the network's actual output and the expected output.
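The update described above can be sketched in plain Python. The following is a minimal illustration, not part of the original text: a small sigmoid network (the 2-3-1 architecture, learning rate, seed, and epoch count are illustrative choices) trained on XOR. For each presented pair, the output unit's delta is the error times the sigmoid derivative, hidden-unit deltas are propagated back through the output weights, and each connection weight changes by the learning rate times the delta times the input arriving on that connection.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=30000, eta=0.5, seed=0):
    """Train a tiny 2-3-1 sigmoid network on XOR with the generalized delta rule.

    All sizes and hyperparameters here are illustrative, not prescribed by the rule.
    """
    rng = random.Random(seed)
    # Small random initial weights, as described above; the last entry of each
    # weight list is a bias term.
    w_h = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]  # 3 hidden units
    w_o = [rng.uniform(-0.5, 0.5) for _ in range(4)]                      # 1 output unit

    data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        o = sigmoid(sum(w_o[j] * h[j] for j in range(3)) + w_o[3])
        return h, o

    for _ in range(epochs):
        rng.shuffle(data)  # pairs are presented repeatedly, in random order
        for x, target in data:
            h, o = forward(x)
            # Output unit: delta = (target - output) * f'(net),
            # where f'(net) = o * (1 - o) for the sigmoid.
            delta_o = (target - o) * o * (1.0 - o)
            # Hidden units: delta = f'(net) * (downstream delta * connecting weight),
            # computed with the output weights *before* they are updated.
            delta_h = [h[j] * (1.0 - h[j]) * delta_o * w_o[j] for j in range(3)]
            # Each weight changes by eta * delta * (input to that connection).
            for j in range(3):
                w_o[j] += eta * delta_o * h[j]
            w_o[3] += eta * delta_o  # output bias
            for j in range(3):
                w_h[j][0] += eta * delta_h[j] * x[0]
                w_h[j][1] += eta * delta_h[j] * x[1]
                w_h[j][2] += eta * delta_h[j]  # hidden bias

    return lambda x: forward(x)[1]
```

Calling `train_xor()` returns a prediction function; after training, `predict([0, 1])` should be close to 1 and `predict([0, 0])` close to 0, though how quickly (and whether) the network converges depends on the random initialization and epoch count.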
Why is this Useful?
The generalized delta rule is important because it makes it possible to train networks with hidden layers, which can learn complex relations between inputs and outputs. Because the rule is derived mathematically from minimizing the network error, it also rests on a firmer theoretical footing than many earlier, heuristic learning rules.