Subsampling Winner Algorithm for Feature Selection in Large Regression Data

by Yiying Fan et al.

Feature selection from a large number of covariates (aka features) in a regression analysis remains a challenge in data science, especially in terms of its potential to scale to ever-enlarging data and to find a group of scientifically meaningful features. For example, to develop new, responsive drug targets for ovarian cancer, the actual false discovery rate (FDR) of a practical feature selection procedure must match the target FDR. The popular approach to feature selection, when the true features are sparse, is to use a penalized likelihood or a shrinkage estimator, such as a LASSO, SCAD, Elastic Net, or MCP procedure (call these the benchmark procedures). We present a different approach using a new subsampling method, called the Subsampling Winner algorithm (SWA). The central idea of SWA is analogous to that used for the selection of US National Merit Scholars. SWA uses a "base procedure" to analyze each of the subsamples, computes a score for every feature from its performance across all subsample analyses, obtains the "semifinalists" based on the resulting scores, and then determines the "finalists," i.e., the most important features. Due to its subsampling nature, SWA can in principle scale to data of any dimension. SWA also has the best-controlled actual FDR in comparison with the benchmark procedures and random forest, while maintaining a competitive true-feature discovery rate. We also suggest practical add-on strategies for SWA, with or without a penalized benchmark procedure, to further improve the chance of "true" discoveries. Our application of SWA to the ovarian serous cystadenocarcinoma specimens from the Broad Institute revealed functionally important genes and pathways, which we verified by additional genomics tools. This second-stage investigation is essential in the current discussion of the proper use of P-values.
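To make the semifinalist/finalist idea concrete, here is a minimal sketch of the subsampling scheme in Python. The choice of base procedure (ordinary least squares), the scoring rule (average absolute coefficient across subsamples), and all function and parameter names below are illustrative assumptions for exposition, not the paper's actual specification.

```python
import numpy as np

def subsampling_winner(X, y, n_subsamples=200, subset_size=10,
                       n_semifinalists=20, n_finalists=5, seed=0):
    """Illustrative sketch of the Subsampling Winner idea (not the
    paper's exact algorithm): repeatedly fit a simple base procedure
    (here OLS) on random feature subsets, score each feature by its
    average absolute coefficient over the subsamples it appeared in,
    keep the top-scoring "semifinalists", then refit on those alone
    to pick the "finalists"."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    scores = np.zeros(p)   # accumulated |coefficient| per feature
    counts = np.zeros(p)   # number of subsamples each feature entered
    for _ in range(n_subsamples):
        idx = rng.choice(p, size=subset_size, replace=False)
        beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        scores[idx] += np.abs(beta)
        counts[idx] += 1
    avg = scores / np.maximum(counts, 1)
    # Semifinalists: features with the highest average scores.
    semi = np.argsort(avg)[-n_semifinalists:]
    # Finalists: refit the base procedure on the semifinalists only.
    beta, *_ = np.linalg.lstsq(X[:, semi], y, rcond=None)
    finalists = semi[np.argsort(np.abs(beta))[-n_finalists:]]
    return np.sort(finalists)
```

Because each fit touches only a small feature subset, the per-iteration cost is independent of the total number of features, which is what lets a scheme like this scale to very high-dimensional data.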




