Tangent Space Separability in Feedforward Neural Networks

12/18/2019
by Bálint Daróczy, et al.

Hierarchical neural networks are exponentially more efficient than their "shallow" counterparts with the same expressive power, but they involve a huge number of parameters and require large amounts of training. By approximating the tangent subspace, we suggest a sparse representation that enables switching to a shallow network, GradNet, after a very early training stage. Our experiments show that the proposed approximation of the metric improves on, and sometimes even significantly surpasses, the achievable performance of the original network, even after only a few epochs of training the original feedforward network.
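To give a concrete picture of the general idea, the following sketch (not the authors' code) trains a small feedforward network for only a couple of epochs, uses per-example gradients of the loss with respect to all parameters as tangent-space features, sparsifies them by magnitude, and fits a shallow linear model on top, in the spirit of GradNet. The layer sizes, the two-epoch training budget, the sparsification threshold, and the use of scikit-learn's LogisticRegression as the shallow model are all illustrative assumptions.

import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)

# Synthetic data standing in for a real dataset.
X = torch.randn(200, 20)
Y = (X[:, 0] > 0).long()

# Small feedforward network; sizes are illustrative.
net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# "Very early training stage": only a couple of epochs of full-batch SGD.
opt = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(2):
    opt.zero_grad()
    loss_fn(net(X), Y).backward()
    opt.step()

def tangent_features(x, y, sparsity=0.9):
    """Per-example gradient of the loss w.r.t. all parameters,
    kept sparse by retaining only the largest-magnitude entries."""
    loss = loss_fn(net(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(net.parameters()))
    g = torch.cat([gr.flatten() for gr in grads])
    k = max(1, int((1.0 - sparsity) * g.numel()))
    thresh = g.abs().topk(k).values[-1]
    return torch.where(g.abs() >= thresh, g, torch.zeros_like(g))

# Tangent-space features for every example, then a shallow linear model on top.
feats = torch.stack([tangent_features(x, y) for x, y in zip(X, Y)]).numpy()
shallow = LogisticRegression(max_iter=1000).fit(feats, Y.numpy())
print("shallow model accuracy on tangent features:", shallow.score(feats, Y.numpy()))

This sketch only illustrates the switch from the deep network to a shallow model over sparse gradient (tangent-space) features; the paper's actual metric approximation and GradNet construction may differ.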
