PILAE: A Non-gradient Descent Learning Scheme for Deep Feedforward Neural Networks

11/05/2018
by P. Guo, et al.

In this work, a non-gradient descent learning scheme is proposed for deep feedforward neural networks (DNN). As is well known, autoencoders can serve as the building blocks of multi-layer perceptron (MLP) deep neural networks, so the MLP is taken as an example to illustrate the proposed pseudoinverse learning algorithm for autoencoders (PILAE). PILAE with low rank approximation is a non-gradient-based learning algorithm: the encoder weight matrix is set to a low rank approximation of the pseudoinverse of the input matrix, while the decoder weight matrix is computed by the pseudoinverse learning algorithm. It is worth noting that only a few network structure hyperparameters need to be tuned. Hence, the proposed algorithm can be regarded as a quasi-automated training algorithm suitable for autonomous machine learning research. The experimental results show that the proposed learning scheme for DNN achieves a favorable tradeoff between training efficiency and classification accuracy.
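The sketch below illustrates one plausible reading of a single PILAE-style layer in NumPy; it is not the authors' reference implementation. Here the encoder weights are formed from a rank-truncated SVD of the input (one concrete way to realize "the low rank approximation of the pseudoinverse of the input matrix"), and the decoder weights are obtained in closed form by pseudoinverse learning. All names (pilae_layer, W_e, W_d, rank) are illustrative.

```python
import numpy as np

def pilae_layer(X, rank, activation=np.tanh):
    """One non-gradient autoencoder layer (illustrative PILAE sketch).

    X    : (d, n) data matrix, columns are samples (assumed layout).
    rank : truncation rank, i.e. the hidden-layer width.
    Returns the hidden codes H and the encoder/decoder weights.
    """
    # Truncated SVD of X; pinv(X) = V diag(1/s) U^T, so a rank-p
    # approximation of it is built from the leading p singular triplets.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    p = min(rank, len(s))

    # Encoder weights: the (p x d) factor diag(1/s_p) U_p^T of the
    # truncated pseudoinverse, mapping inputs into a p-dim hidden space.
    W_e = (1.0 / s[:p])[:, None] * U[:, :p].T
    H = activation(W_e @ X)                      # (p, n) hidden codes

    # Decoder weights by pseudoinverse learning: the least-squares
    # solution of W_d @ H ~ X, i.e. W_d = X H^+ (no gradient descent).
    W_d = X @ np.linalg.pinv(H)
    return H, W_e, W_d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 500))           # toy data: 64-dim, 500 samples
    H1, We1, Wd1 = pilae_layer(X, rank=32)       # first layer
    H2, We2, Wd2 = pilae_layer(H1, rank=16)      # stack: next layer trained on H1
    recon = Wd1 @ np.tanh(We1 @ X)               # layer-1 reconstruction
    print("layer-1 relative reconstruction error:",
          np.linalg.norm(recon - X) / np.linalg.norm(X))
```

Stacking layers in this way builds the deep feedforward network greedily, layer by layer, with only the truncation rank (the hidden width) left as a structural hyperparameter, which is the sense in which the scheme is quasi-automated.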
