Lasso tuning through the flexible-weighted bootstrap
Regularized regression approaches such as the Lasso have been widely adopted for constructing sparse linear models from high-dimensional data. A complication in fitting these models is tuning the parameters that control the level of sparsity introduced through penalization. The most common approach to selecting the penalty parameter is k-fold cross-validation. While cross-validation minimizes the empirical prediction error, approaches such as the m-out-of-n paired bootstrap, which use smaller training sets, consistently select the non-zero coefficients of the oracle model; they perform well in an asymptotic setting but have limitations when n is small. In fact, for models such as the Lasso there is a monotonic relationship between the size of the training sets and the penalty parameter. We propose a generalization of these methods for selecting the regularization parameter based on a flexible-weighted bootstrap procedure that mimics the m-out-of-n bootstrap and overcomes its challenges at all sample sizes. Through simulation studies we demonstrate that, when selecting a penalty parameter, the choice of weights in the bootstrap procedure can be used to dictate the size of the penalty parameter and hence the sparsity of the fitted model. We illustrate our weighted bootstrap procedure empirically by applying the Lasso to integrate clinical and microRNA data in modeling Alzheimer's disease. In both the real and simulated data we find that a narrow part of the parameter space performs well, emulating an m-out-of-n bootstrap, and that our procedure can be used to improve interpretation of other optimization heuristics.
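To make the idea concrete, the following is a minimal sketch of a weighted-bootstrap tuning loop, not the authors' exact procedure. It assumes a hypothetical weight scheme in which per-observation weights are drawn from a Gamma(alpha, 1) distribution and rescaled to sum to n: a small shape parameter alpha concentrates mass on few observations, mimicking an m-out-of-n resample, while a large alpha approaches uniform weights. The function names (`weighted_lasso`, `select_lambda`) and the grid-search structure are illustrative assumptions; the Lasso fit itself is standard coordinate descent with per-observation weights.

```python
import numpy as np

def weighted_lasso(X, y, w, lam, n_iter=200):
    """Coordinate-descent Lasso with per-observation weights w.

    Minimizes (1/2) * sum_i w_i * (y_i - x_i @ beta)**2 + lam * ||beta||_1.
    """
    n, p = X.shape
    beta = np.zeros(p)
    # Weighted squared column norms, used in the coordinate updates.
    col_norm = np.array([np.sum(w * X[:, j] ** 2) for j in range(p)])
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = np.sum(w * X[:, j] * r)
            # Soft-thresholding update for the L1 penalty.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm[j]
    return beta

def select_lambda(X, y, lam_grid, alpha=1.0, B=20, seed=0):
    """Pick the penalty from lam_grid that minimizes the average
    full-sample prediction error over B weighted-bootstrap fits.

    alpha governs the weight distribution: small alpha mimics an
    m-out-of-n bootstrap (few observations dominate each fit).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = np.zeros(len(lam_grid))
    for _ in range(B):
        # Random Gamma weights, rescaled to sum to n (assumed scheme).
        w = rng.gamma(alpha, 1.0, size=n)
        w *= n / w.sum()
        for k, lam in enumerate(lam_grid):
            beta = weighted_lasso(X, y, w, lam)
            errs[k] += np.mean((y - X @ beta) ** 2)
    return lam_grid[int(np.argmin(errs))]
```

Under this sketch, varying alpha plays the role that the resample size m plays in the m-out-of-n bootstrap: it shifts which penalty minimizes the bootstrap prediction error, and hence the sparsity of the selected model.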