An inexact subsampled proximal Newton-type method for large-scale machine learning

08/28/2017
by Xuanqing Liu, et al.

We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an ϵ-suboptimal point in Õ(d(n + √(κ d))(1/ϵ)) FLOPS, where n is the number of samples, d is the feature dimension, and κ is the condition number. As long as n > d, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers, which require Õ(d(n + √(κ n))(1/ϵ)) FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite-sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for ℓ_1-regularized logistic regression on real datasets.
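To make the key idea concrete, here is a minimal sketch of one inexact subsampled proximal Newton step for ℓ_1-regularized logistic regression, the setting used in the paper's experiments. The subsampled Hessian (1/m) Σ_{i∈S} D_i x_i x_iᵀ retains the finite-sum structure the abstract refers to; for brevity, the inner solver below is plain proximal gradient descent rather than an accelerated stochastic method, and all function names, sample sizes, and step-size choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def logistic_grad(w, X, y):
    # Gradient of (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)), labels y_i in {-1, +1}.
    margins = y * (X @ w)
    return -(X.T @ (y / (1.0 + np.exp(margins)))) / X.shape[0]

def subsampled_prox_newton_step(w, X, y, lam, m=256, inner_iters=50):
    # One outer iteration: full gradient, subsampled Hessian, inexact subproblem solve.
    n, d = X.shape
    g = logistic_grad(w, X, y)                      # full gradient of the smooth part
    idx = np.random.choice(n, size=min(m, n), replace=False)
    Xs, ys = X[idx], y[idx]                         # subsample defining the Hessian
    s = 1.0 / (1.0 + np.exp(-ys * (Xs @ w)))
    D = s * (1.0 - s)                               # per-sample curvature weights
    # H_S v = (1/m) Xs^T diag(D) Xs v is itself a finite sum over the subsample.
    Hv = lambda v: Xs.T @ (D * (Xs @ v)) / len(idx)
    # Safe step size: D <= 1/4, so ||H_S|| <= sum_{i in S} ||x_i||^2 / (4m).
    eta = len(idx) / (0.25 * np.sum(Xs ** 2) + 1e-12)
    # Inexactly minimize g^T(u - w) + 0.5 (u - w)^T H_S (u - w) + lam * ||u||_1
    # by proximal gradient; the paper instead solves this with stochastic
    # first-order methods that exploit the finite-sum form.
    u = w.copy()
    for _ in range(inner_iters):
        u = soft_threshold(u - eta * (g + Hv(u - w)), eta * lam)
    return u
```

Repeating this step gives the outer loop. Because the quadratic model is again a finite sum over the m subsampled points, the inner proximal gradient loop can be swapped for an accelerated stochastic first-order solver, which is what yields the Õ(d(n + √(κ d))(1/ϵ)) complexity claimed above.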
