Biologically plausible deep learning

11/19/2018
by Yali Amit et al.

Building on the model proposed in Lillicrap et al., we show that deep networks can be trained using biologically plausible Hebbian rules, yielding performance similar to ordinary back-propagation. To overcome the unrealistic symmetry of connections between layers that is implicit in back-propagation, the feedback weights are kept separate from the feedforward weights. In contrast to Lillicrap et al., however, the feedback weights are also updated with a local rule: each weight is updated solely based on the activity of the two units it connects. With fixed feedback weights, performance degrades quickly as the depth of the network increases; when the feedback weights are updated, performance is comparable to regular back-propagation. We also propose a cost function whose derivative can be expressed as a local update rule on the last layer. Standard convolutional layers tie weights across spatial locations, which is not biologically plausible. We show that similar performance is achieved with sparse layers that retain the connectivity pattern implied by the convolutional layers, but whose weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
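To make the idea concrete, the sketch below trains a small fully connected network with separate feedforward weights W and feedback weights B: error signals are carried backward through B rather than through the transpose of W, and both W and B are changed with local, Hebbian-style rules that use only the activity of the pre- and post-synaptic units each weight connects. This is an illustrative reconstruction, not the paper's exact algorithm; the layer sizes, learning rates, ReLU nonlinearity, and squared-error loss on a linear output layer are assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's exact algorithm): a network with
# separate feedforward weights W and feedback weights B. Errors are propagated
# through B instead of W.T, and both W and B receive local updates that depend
# only on the activity at the two ends of each connection.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    return (x > 0.0).astype(x.dtype)

# Network: input -> hidden 1 -> hidden 2 -> output (sizes are illustrative).
sizes = [784, 256, 256, 10]
W = [rng.normal(0, 0.05, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]  # feedforward
B = [rng.normal(0, 0.05, (n, m)) for n, m in zip(sizes[:-1], sizes[1:])]  # feedback, separate from W.T
lr_w, lr_b = 1e-3, 1e-3

def train_step(x, y):
    """One local-update step on a single input x (shape [784]) with one-hot target y (shape [10])."""
    # Forward pass, keeping pre-activations for the local gradient factors.
    a, z = [x], []
    for l, Wl in enumerate(W):
        z.append(Wl @ a[-1])
        a.append(z[-1] if l == len(W) - 1 else relu(z[-1]))

    # Output error signal (squared-error loss on a linear last layer).
    delta = a[-1] - y

    # Backward pass: each weight change uses only the post-synaptic error signal
    # and the pre-synaptic activity of the units it connects.
    for l in reversed(range(len(W))):
        W[l] -= lr_w * np.outer(delta, a[l])       # local feedforward update
        B[l] -= lr_b * np.outer(a[l], delta)       # local feedback update (drifts toward W[l].T)
        if l > 0:
            # Error carried backward by the feedback weights, not by W.T.
            delta = (B[l] @ delta) * relu_grad(z[l - 1])

# Example usage with random data standing in for MNIST-sized inputs.
x = rng.random(784)
y = np.eye(10)[3]
for _ in range(10):
    train_step(x, y)
```

Because the feedback update mirrors the feedforward update, B is driven toward the transpose of W over training, which is one simple way the locally updated feedback weights can recover back-propagation-like performance; freezing the B update (lr_b = 0) reduces the sketch to fixed random feedback as in Lillicrap et al.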
