Efficiently Using Second Order Information in Large l1 Regularization Problems

03/27/2013
by Xiaocheng Tang, et al.

We propose LHAC, a general algorithm that efficiently uses second-order information to solve a class of large-scale l1-regularized problems. The method executes cheap iterations while achieving a fast local convergence rate by exploiting the special structure of a low-rank matrix, constructed via a quasi-Newton approximation of the Hessian of the smooth loss function. A greedy active-set strategy, based on the largest violations of the dual constraints, maintains a working set that iteratively estimates the complement of the optimal active set. This keeps the subproblems small and eventually identifies the optimal active set. Empirical comparisons confirm that LHAC is highly competitive with several recently proposed state-of-the-art specialized solvers for sparse logistic regression and sparse inverse covariance matrix selection.
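The abstract's two ingredients lend themselves to a short illustration. The sketch below is not the authors' LHAC; it is a much-simplified, self-contained Python mock-up under our own assumptions (the names soft_threshold, kkt_violations, greedy_working_set, prox_quasi_newton_l1 and the parameters lam, k_extra are ours). It shows (1) a greedy working set built from the coordinates with the largest violations of the l1 optimality (dual) constraints, and (2) cheap curvature information, here reduced from a low-rank quasi-Newton model to a memory-one Barzilai-Borwein scaling, driving a proximal step restricted to that working set.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def kkt_violations(x, grad, lam):
    """Violation of the optimality conditions of min_x f(x) + lam*||x||_1:
    nonzero coordinates need grad_i + lam*sign(x_i) = 0; zero coordinates
    need |grad_i| <= lam (the 'dual constraints' of the abstract)."""
    v = np.abs(grad + lam * np.sign(x))
    zero = (x == 0)
    v[zero] = np.maximum(np.abs(grad[zero]) - lam, 0.0)
    return v

def greedy_working_set(x, grad, lam, k_extra=50):
    """Current nonzeros plus the k_extra zero coordinates whose dual
    constraints are most violated (greedy active-set strategy)."""
    v = kkt_violations(x, grad, lam)
    zeros = np.flatnonzero(x == 0)
    worst = zeros[np.argsort(v[zeros])[::-1][:k_extra]]
    return np.union1d(np.flatnonzero(x != 0), worst).astype(int)

def prox_quasi_newton_l1(f_grad, x0, lam, iters=200, tol=1e-6):
    """Proximal steps on the working set, scaled by a memory-one
    quasi-Newton (Barzilai-Borwein) curvature estimate.
    No line search, for brevity; a real solver would add one."""
    x, grad, step = x0.copy(), f_grad(x0), 1.0
    for _ in range(iters):
        W = greedy_working_set(x, grad, lam)
        x_new = x.copy()
        x_new[W] = soft_threshold(x[W] - step * grad[W], step * lam)
        grad_new = f_grad(x_new)
        s, y = x_new - x, grad_new - grad
        sy = float(s @ y)
        if sy > 1e-12:               # curvature estimate from (s, y)
            step = sy / float(y @ y)
        x, grad = x_new, grad_new
        if kkt_violations(x, grad, lam).max() < tol:
            break
    return x
```

On a toy lasso instance with f(x) = 0.5*||Ax - b||^2, f_grad would be lambda x: A.T @ (A @ x - b), started from np.zeros(A.shape[1]). Replacing the scalar Barzilai-Borwein scaling with a genuine low-rank (L-BFGS-style) model of the Hessian, and solving the resulting subproblem by coordinate descent on W, is the step up from this sketch toward a method in the spirit of LHAC.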


