A Feature Selection Based on Perturbation Theory

by Javad Rahimipour Anaraki, et al.

Consider a supervised dataset D=[A|b], where b is the outcome column, the rows of D correspond to observations, and the columns of A are the features of the dataset. A central problem in machine learning and pattern recognition is to select the most important features of D so that the outcome can be predicted. In this paper, we propose a new feature selection method that uses perturbation theory to detect correlations between features. We solve AX=b using the method of least squares and the singular value decomposition of A. In practical applications, such as bioinformatics, the number of rows of A (observations) is much smaller than the number of columns of A (features), so we are dealing with singular matrices that have large condition numbers. Although it is known that solutions of least-squares problems in the singular case are very sensitive to perturbations in A, the novel contribution of this paper is to prove that correlations between features can be detected by applying perturbations to A. The effectiveness of our method is verified through a series of comparisons with conventional and recent feature selection methods from the literature. In most situations, our method selects considerably fewer features while attaining or exceeding the accuracy of the other methods.
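The mechanics described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration of the general idea only, not the authors' exact algorithm: solve the (possibly singular) system AX=b via the SVD-based pseudoinverse, apply a small random perturbation to A, re-solve, and record how much each coefficient moves. The function name, the perturbation scale `eps`, and the use of Gaussian noise are all assumptions made for the sketch; how the per-feature sensitivities are thresholded into a selected subset is left to the paper.

```python
import numpy as np

def perturbation_sensitivity(A, b, eps=1e-3, seed=0):
    """Per-feature sensitivity of the least-squares solution of A x = b.

    Hypothetical sketch: solve via the SVD-based pseudoinverse, perturb A
    by small Gaussian noise, re-solve, and return how much each
    coefficient changed. (Illustrative only; eps and the noise model are
    assumptions, not the paper's specification.)
    """
    rng = np.random.default_rng(seed)
    x = np.linalg.pinv(A) @ b  # minimum-norm least-squares solution (SVD-based)
    x_pert = np.linalg.pinv(A + eps * rng.standard_normal(A.shape)) @ b
    return np.abs(x - x_pert)  # one sensitivity score per feature (column of A)

# Toy setup mirroring the bioinformatics regime the abstract describes:
# far fewer rows (observations) than columns (features), so A is singular.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 40))
b = A @ rng.standard_normal(40)
scores = perturbation_sensitivity(A, b)
print(scores.shape)
```

Using `np.linalg.pinv` (rather than `np.linalg.lstsq`) makes the SVD route explicit: small singular values are regularized away by the pseudoinverse's cutoff, which is exactly the regime where the solution is sensitive to perturbations of A.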

