Automated Sizing and Training of Efficient Deep Autoencoders using Second Order Algorithms

by Kanishka Tyagi, et al.

We propose a multi-step training method for designing generalized linear classifiers. First, an initial multi-class linear classifier is found through regression. Validation error is then minimized by pruning unnecessary inputs; simultaneously, the desired outputs are improved via a method similar to the Ho-Kashyap rule. Next, the output discriminants are scaled to be net functions of sigmoidal output units in a generalized linear classifier. We then develop a family of batch training algorithms for the multilayer perceptron (MLP) that optimizes its hidden layer size and number of training epochs, and we combine pruning with a growing approach. The input units are then scaled to be the net functions of sigmoidal output units, whose outputs are in turn fed as inputs to the MLP. We propose resulting improvements in each of the deep learning blocks, thereby improving the overall performance of the deep architecture. We discuss the principles and formulation of learning algorithms for deep autoencoders, and investigate several problems in deep autoencoder networks, including training issues; theoretical, mathematical, and experimental justification that the networks are linear; optimizing the number of hidden units in each layer; and determining the depth of the deep learning model. A direct implication of the current work is the ability to construct fast deep learning models using desktop-level computational resources. This, in our opinion, promotes our design philosophy of building small but powerful algorithms. Performance gains are demonstrated at each step. Using widely available datasets, the final network's ten-fold testing error is shown to be less than that of several other linear classifiers, generalized linear classifiers, multilayer perceptrons, and deep learners reported in the literature.
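The abstract does not give the exact formulation, but the first two steps it describes, regression-based initialization of a multi-class linear classifier and validation-driven input pruning, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the names `fit_linear` and `prune_inputs` are invented here, and greedy backward elimination stands in for whatever pruning schedule the paper actually uses.

```python
import numpy as np

def fit_linear(X, Y):
    """Least-squares multi-class linear classifier.

    X: (N, d) inputs; Y: (N, C) one-hot desired outputs.
    Returns a (d+1, C) weight matrix (last row is the bias),
    obtained via the pseudoinverse of the augmented input matrix.
    """
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.linalg.pinv(Xa) @ Y

def predict(W, X):
    """Class decision: argmax over the linear discriminants."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.argmax(Xa @ W, axis=1)

def prune_inputs(X_tr, Y_tr, X_val, y_val):
    """Greedy backward elimination of inputs (one possible pruning scheme).

    Repeatedly drops an input whose removal does not raise the
    validation error; stops when every remaining input is needed.
    """
    keep = list(range(X_tr.shape[1]))

    def val_err(cols):
        W = fit_linear(X_tr[:, cols], Y_tr)
        return np.mean(predict(W, X_val[:, cols]) != y_val)

    best = val_err(keep)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for c in list(keep):
            trial = [k for k in keep if k != c]
            err = val_err(trial)
            if err <= best:  # tie-break toward the smaller model
                keep, best, improved = trial, err, True
                break
    return keep, best
```

On synthetic data with one informative input and one pure-noise input, this procedure keeps the informative input and typically discards the noise column, while the regression fit supplies the initial discriminants that later steps of the method rescale and refine.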




