Efficient and robust high-dimensional sparse logistic regression via nonlinear primal-dual hybrid gradient algorithms

11/30/2021
by Jérôme Darbon, et al.

Logistic regression is a widely used statistical model to describe the relationship between a binary response variable and predictor variables in data sets. It is often used in machine learning to identify important predictor variables. This task, variable selection, typically amounts to fitting a logistic regression model regularized by a convex combination of ℓ_1 and ℓ_2^2 penalties. Since modern big data sets can contain hundreds of thousands to billions of predictor variables, variable selection methods depend on efficient and robust optimization algorithms to perform well. State-of-the-art algorithms for variable selection, however, were not traditionally designed to handle big data sets; they either scale poorly in size or are prone to produce unreliable numerical results. It therefore remains challenging to perform variable selection on big data sets without access to adequate and costly computational resources. In this paper, we propose a nonlinear primal-dual algorithm that addresses these shortcomings. Specifically, we propose an iterative algorithm that provably computes a solution to a logistic regression problem regularized by an elastic net penalty in O(T(m,n)log(1/ϵ)) operations, where ϵ∈ (0,1) denotes the tolerance and T(m,n) denotes the number of arithmetic operations required to perform matrix-vector multiplication on a data set with m samples each comprising n features. This result improves on the known complexity bound of O(min(m^2n,mn^2)log(1/ϵ)) for first-order optimization methods such as the classic primal-dual hybrid gradient or forward-backward splitting methods.
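The paper's nonlinear primal-dual algorithm involves problem-specific proximal maps; as a point of reference, the forward-backward splitting baseline mentioned above, applied to the same elastic-net-regularized logistic regression problem, can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' method: the names `A`, `y`, `lam1`, and `lam2` are illustrative, and the step size uses the standard Lipschitz bound on the logistic-loss gradient.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1 (componentwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_elastic_net(v, step, lam1, lam2):
    # Proximal map of step * (lam1 * ||x||_1 + (lam2 / 2) * ||x||_2^2):
    # soft-threshold for the l1 term, then shrink for the squared-l2 term.
    return soft_threshold(v, step * lam1) / (1.0 + step * lam2)

def logistic_loss_grad(A, y, x):
    # Gradient of sum_i log(1 + exp(-y_i * a_i^T x)) with labels y in {-1, +1}.
    z = -y * (A @ x)
    s = 1.0 / (1.0 + np.exp(-z))      # sigmoid(z_i)
    return A.T @ (-y * s)

def forward_backward(A, y, lam1, lam2, n_iter=500):
    # Forward-backward splitting: gradient step on the smooth logistic loss,
    # proximal step on the nonsmooth elastic-net penalty.
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2 / 4.0   # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(n)
    for _ in range(n_iter):
        g = logistic_loss_grad(A, y, x)
        x = prox_elastic_net(x - step * g, step, lam1, lam2)
    return x
```

Each iteration costs one matrix-vector product with A and one with A^T, i.e. O(T(m,n)) per iteration, which is the per-iteration cost the paper's O(T(m,n) log(1/ϵ)) bound improves into an overall complexity.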

