Supervised Generative Reconstruction: An Efficient Way To Flexibly Store and Recognize Patterns

12/13/2011
by Tsvi Achler

Matching animal-like flexibility in recognition, and the ability to quickly incorporate new information, remains difficult; these limits have yet to be adequately addressed in neural models and recognition algorithms. This work proposes a configuration for recognition that maintains the same function as conventional algorithms but avoids their combinatorial problems. Feedforward recognition algorithms, such as classical artificial neural networks and machine learning algorithms, are known to be subject to catastrophic interference and forgetting: modifying or learning new information (associations between patterns and labels) causes loss of previously learned information. I demonstrate through mathematical analysis how supervised generative models, with feedforward and feedback connections, can emulate feedforward algorithms yet avoid catastrophic interference and forgetting. Learned information in generative models is stored in a more intuitive form that represents the fixed points, or solutions, of the network, and it moreover displays difficulties similar to cognitive phenomena. The brain-like capabilities and limits associated with generative models suggest the brain may perform recognition and store information using a similar approach. Given the central role of recognition, progress in understanding the underlying principles may reveal significant insight into how to better study and integrate with the brain.
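To make the contrast concrete, here is a minimal sketch, not the paper's actual formulation: the prototype-per-row storage scheme and the multiplicative update rule below are assumptions chosen for illustration. A feedforward classifier stores associations in weights fit by error-driven optimization, so learning a new pattern-label pair perturbs existing weights. A supervised generative reconstruction network instead stores each label's pattern directly as a row of W, and recognition is an iterative feedforward/feedback inference that seeks activations y whose top-down reconstruction W.T @ y matches the input x; the stored patterns are the fixed points, so adding a new label appends a row without disturbing the old ones.

    import numpy as np

    def recognize(W, x, steps=200, eps=1e-9):
        """Infer label activations y by iterating toward the fixed point
        where the top-down reconstruction W.T @ y matches the input x.
        Each row W[i] is the stored prototype pattern for label i."""
        y = np.ones(W.shape[0]) / W.shape[0]    # start from uniform activations
        for _ in range(steps):
            recon = W.T @ y + eps               # feedback: current reconstruction
            # Multiplicative update (assumed for illustration): boost labels
            # whose prototypes explain under-reconstructed input features.
            y *= (W @ (x / recon)) / (W.sum(axis=1) + eps)
        return y

    # "Learning" is storage: one prototype row per label.
    W = np.array([[1.0, 1.0, 0.0, 0.0],   # label 0
                  [0.0, 0.0, 1.0, 1.0]])  # label 1

    x = np.array([1.0, 1.0, 0.0, 0.0])
    print(recognize(W, x))                # label 0 dominates

    # Incorporating a new label appends a row; existing rows are untouched,
    # so earlier associations are not overwritten (no catastrophic forgetting).
    W = np.vstack([W, [0.0, 1.0, 1.0, 0.0]])
    print(recognize(W, x))                # label 0 still dominates

The design point the sketch illustrates is that learning reduces to storage: the computational work moves to recognition time, where feedback iterations settle on the activations that best reconstruct the input. That trade-off is what lets new associations be added without retraining and without interfering with previously stored patterns.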


