Embedded methods for feature selection in neural networks

10/12/2020
by Vinay Varma K, et al.

The representational capacity of modern neural network architectures has made them a default choice for applications with high-dimensional feature sets. But high-dimensional, potentially noisy features, combined with black-box models like neural networks, hurt interpretability, generalizability, and training time. Here, I propose two integrated approaches to feature selection that can be incorporated directly into parameter learning. One adds a drop-in layer and performs sequential weight pruning; the other is a sensitivity-based approach. I benchmarked both methods against Permutation Feature Importance (PFI), a general-purpose feature ranking method, and a random baseline. The proposed approaches turn out to be viable methods for feature selection, consistently outperforming the baselines on the tested datasets: MNIST, ISOLET, and HAR. They can be added to any existing model with only a few lines of code.
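The abstract only sketches the two ideas at a high level. Below is a minimal PyTorch illustration of what such embedded selectors might look like, assuming a per-feature gating layer pruned by weight magnitude and a gradient-based sensitivity score. The names (FeatureSelector, sensitivity_scores, prune) are illustrative assumptions, not the author's implementation.

```python
import torch
import torch.nn as nn


class FeatureSelector(nn.Module):
    """Drop-in input gate: one learnable weight per feature.

    Each input feature is multiplied by a learnable gate; gates that
    have been pruned (masked to zero) remove that feature from the
    downstream network.
    """

    def __init__(self, n_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(n_features))
        self.register_buffer("mask", torch.ones(n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * (self.weight * self.mask)

    @torch.no_grad()
    def prune(self, fraction: float) -> None:
        """Zero out the smallest-magnitude gates among those still active."""
        scores = (self.weight * self.mask).abs()
        scores[self.mask == 0] = float("inf")  # never reconsider pruned gates
        k = max(1, int(fraction * int(self.mask.sum().item())))
        _, idx = torch.topk(scores, k, largest=False)
        self.mask[idx] = 0.0


def sensitivity_scores(model: nn.Module, x: torch.Tensor,
                       y: torch.Tensor) -> torch.Tensor:
    """Rank features by mean |d loss / d input|, a simple sensitivity proxy."""
    x = x.detach().clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.abs().mean(dim=0)  # one score per input feature
```

As the abstract suggests, the selector drops into an existing model with a couple of lines; sequential pruning then alternates training with calls to prune:

```python
net = nn.Sequential(FeatureSelector(784), nn.Linear(784, 10))
# ... train for a few epochs, then mask the 20% least important
# features and continue training:
net[0].prune(fraction=0.2)
```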
