Deep Sparse Coding for Invariant Multimodal Halle Berry Neurons

11/21/2017
by Edward Kim, et al.

Deep feed-forward convolutional neural networks (CNNs) have become ubiquitous in virtually all machine learning and computer vision challenges; however, advancements in CNNs have arguably reached an engineering saturation point where incremental novelty yields only minor performance gains. Although there is evidence that object classification has reached human levels on narrowly defined tasks, for general applications the biological visual system remains far superior to that of any computer. Research reveals numerous components critical to mammalian vision that are missing from feed-forward deep neural networks. The brain does not operate solely in a feed-forward fashion; rather, its neurons compete with one another, integrating information both bottom-up and top-down and incorporating expectation and feedback into the modeling process. Furthermore, the visual cortex works in tandem with the parietal lobe, integrating sensory information across modalities. In our work, we sought to improve upon the standard feed-forward deep learning model by augmenting it with the biologically inspired concepts of sparsity, top-down feedback, and lateral inhibition. We define our model as a sparse coding problem over hierarchical layers, and we solve it with an additional top-down feedback error driving the dynamics of the neural network. While building and observing the behavior of our model, we were fascinated to find that multimodal, invariant neurons naturally emerged, mimicking the "Halle Berry neurons" found in the human brain. Furthermore, our sparse representation of multimodal signals demonstrates qualitative and quantitative superiority over the standard feed-forward joint embedding on common vision and machine learning tasks.
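The hierarchical model described above builds on the classic single-layer sparse coding objective, min_a 0.5·||x − Da||² + λ||a||₁, where D is an overcomplete dictionary and a is the sparse activation vector. As a rough illustration of that base formulation only (the paper's model additionally adds hierarchical layers, top-down feedback, and lateral inhibition, none of which appear here), the following sketch solves the objective with ISTA; the dictionary, λ, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_code(x, D, lam=0.05, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # bottom-up reconstruction error
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy example: a signal composed of two atoms of an overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.0, -0.5]
x = D @ a_true
a = sparse_code(x, D)
print("active neurons:", np.count_nonzero(np.abs(a) > 1e-3))
```

The L1 penalty forces most coefficients to exactly zero, so only a handful of "neurons" remain active for any given input; the paper's contribution is to stack such layers and let a top-down feedback error term reshape these activations.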

