Regularization of Deep Neural Networks with Spectral Dropout

11/23/2017
by Salman Khan, et al.

The big breakthrough on the ImageNet challenge in 2012 was partially due to the `dropout' technique used to avoid overfitting. Here, we introduce a new approach called `Spectral Dropout' to improve the generalization ability of deep neural networks. We cast the proposed approach in the form of regular Convolutional Neural Network (CNN) weight layers using a decorrelation transform with fixed basis functions. Our spectral dropout method prevents overfitting by eliminating weak and `noisy' Fourier domain coefficients of the neural network activations, leading to remarkably better results than the current regularization methods. Furthermore, the proposed method is very efficient due to the fixed basis functions used for spectral transformation. In particular, compared to Dropout and Drop-Connect, our method significantly speeds up the network convergence rate during the training process (roughly x2), with considerably higher neuron pruning rates (an increase of 30%). We demonstrate that spectral dropout can also be used in conjunction with other regularization approaches, resulting in additional performance gains.
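
To make the idea concrete, here is a minimal sketch of the mechanism the abstract describes: transform activations into a spectral domain with a fixed basis, eliminate weak coefficients along with a random subset of the rest, then transform back. This is not the authors' implementation (the paper casts the fixed-basis transform as regular CNN weight layers); the 2-D FFT, `keep_prob`, and `threshold` below are illustrative assumptions.

```python
import torch

def spectral_dropout(x: torch.Tensor, keep_prob: float = 0.9,
                     threshold: float = 1e-2) -> torch.Tensor:
    """Illustrative spectral dropout for CNN activations of shape
    (N, C, H, W). Hyperparameter values are placeholders, not the
    settings reported in the paper."""
    # Decorrelate with a fixed-basis transform (here a 2-D FFT over the
    # spatial dimensions; the paper folds fixed basis functions into
    # ordinary weight layers instead).
    freq = torch.fft.fft2(x)

    # Eliminate weak/"noisy" coefficients, then randomly drop a further
    # fraction of the survivors, analogous to standard dropout.
    weak = freq.abs() < threshold
    dropped = torch.rand_like(x) > keep_prob
    freq = freq * ~(weak | dropped)

    # Map back to the spatial domain; activations are real-valued.
    return torch.fft.ifft2(freq).real
```

As with standard dropout, the random masking would only be applied during training; at test time the layer would pass activations through (or apply the deterministic thresholding alone), so the sketch above is a training-time operation.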
