Backprop Evolution

08/08/2018
by   Maximilian Alber, et al.

The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discovering new variations of the back-propagation equation. We use a domain-specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize generalization performance after a few epochs of training. We find several update equations that train faster than standard back-propagation in short training regimes, and perform similarly to standard back-propagation at convergence.
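The search described above can be illustrated with a minimal toy sketch: update rules are represented as lists of primitive-function names and improved by mutation and truncation selection. The primitive set and the fitness function below are placeholders, not the paper's actual DSL or training setup; a real implementation would score a rule by validation performance after a few epochs of training.

```python
import random

# Illustrative primitive set (not the paper's exact DSL).
PRIMITIVES = ["identity", "negate", "square", "sign", "clip"]

def random_rule(length=3):
    """An update rule is a short list of primitive names."""
    return [random.choice(PRIMITIVES) for _ in range(length)]

def mutate(rule):
    """Replace one primitive at a random position."""
    child = list(rule)
    i = random.randrange(len(child))
    child[i] = random.choice(PRIMITIVES)
    return child

def fitness(rule):
    """Stand-in for 'validation accuracy after a few epochs'.
    Here it is a toy score that simply rewards certain primitives."""
    return rule.count("clip") + 0.5 * rule.count("sign")

def evolve(pop_size=20, generations=30, seed=0):
    """Truncation selection: keep the best half, refill with mutants."""
    random.seed(seed)
    population = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(best)
```

Because survivors are carried over unchanged, the best rule's fitness never decreases between generations; only the mutants explore new candidates.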


Related research

- 03/06/2020, Finding online neural update rules by learning to remember: "We investigate learning of the online local update rules for neural acti..."
- 09/21/2017, Neural Optimizer Search with Reinforcement Learning: "We present an approach to automate the process of discovering optimizati..."
- 06/19/2012, An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence: "The present work deals with an improved back-propagation algorithm based..."
- 06/22/2019, Removing numerical dispersion from linear evolution equations: "In this paper we describe a method for removing the numerical errors in ..."
- 12/01/2017, Susceptibility Propagation by Using Diagonal Consistency: "A susceptibility propagation that is constructed by combining a belief p..."
- 12/15/2019, Dynamic and weighted stabilizations of the L-scheme applied to a phase-field model for fracture propagation: "We consider a phase-field fracture propagation model, which consists of ..."
- 07/01/2021, EqFix: Fixing LaTeX Equation Errors by Examples: "LaTeX is a widely-used document preparation system. Its powerful ability..."
