Finite-sample Analysis of M-estimators using Self-concordance

10/16/2018
by Dmitrii Ostrovskii, et al.

We demonstrate how self-concordance of the loss can be exploited to obtain asymptotically optimal rates for M-estimators in finite-sample regimes. We consider two classes of losses: (i) canonically self-concordant losses in the sense of Nesterov and Nemirovski (1994), i.e., with the third derivative bounded by the 3/2 power of the second; (ii) pseudo self-concordant losses, for which the power is removed, as introduced by Bach (2010). These classes contain some losses arising in generalized linear models, including logistic regression; in addition, the second class includes some common pseudo-Huber losses. Our results establish the critical sample size sufficient to reach the asymptotically optimal excess risk for both classes of losses. Denoting by d the parameter dimension and by d_eff the effective dimension, which accounts for possible model misspecification, we find the critical sample size to be O(d_eff · d) for canonically self-concordant losses, and O(ρ · d_eff · d) for pseudo self-concordant losses, where ρ is a problem-dependent local curvature parameter. In contrast to existing results, we impose only local assumptions on the data distribution, assuming that the calibrated design, i.e., the design scaled by the square root of the second derivative of the loss, is subgaussian at the best predictor θ_*. Moreover, we obtain improved bounds on the critical sample size, scaling near-linearly in max(d_eff, d), under the extra assumption that the calibrated design is subgaussian in the Dikin ellipsoid of θ_*. Motivated by these findings, we construct canonically self-concordant analogues of the Huber and logistic losses with improved statistical properties. Finally, we extend some of these results to ℓ_1-regularized M-estimators in high dimensions.
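As a quick illustration of the two self-concordance notions above (not code from the paper), the following Python sketch checks the pseudo self-concordance inequality |f'''(t)| ≤ R · f''(t) numerically for the logistic loss and a pseudo-Huber loss, using their closed-form derivatives. The grid and loss parameterizations are illustrative choices, not the paper's.

```python
import math

# Pseudo self-concordance (Bach 2010): |f'''(t)| <= R * f''(t) for some constant R.
# For the logistic loss f(t) = log(1 + exp(t)), with s = sigmoid(t):
#   f''(t)  = s(1-s),   f'''(t) = s(1-s)(1-2s),
# so |f'''|/f'' = |1-2s| <= 1, i.e. R = 1.

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def logistic_ratio(t):
    """|f'''(t)| / f''(t) for the logistic loss."""
    s = sigmoid(t)
    d2 = s * (1.0 - s)
    d3 = d2 * (1.0 - 2.0 * s)
    return abs(d3) / d2

# For the pseudo-Huber loss f(t) = sqrt(1 + t^2) - 1:
#   f''(t)  = (1+t^2)^(-3/2),   f'''(t) = -3t (1+t^2)^(-5/2),
# so |f'''|/f'' = 3|t| / (1+t^2) <= 3/2, i.e. R = 3/2.

def pseudo_huber_ratio(t):
    """|f'''(t)| / f''(t) for the pseudo-Huber loss."""
    return 3.0 * abs(t) / (1.0 + t * t)

# Evaluate the ratios on a grid; both stay below their constants R.
grid = [i / 10.0 for i in range(-100, 101)]
print(max(logistic_ratio(t) for t in grid))      # bounded by 1
print(max(pseudo_huber_ratio(t) for t in grid))  # bounded by 3/2
```

Note that the logistic loss is only pseudo self-concordant: the canonical ratio |f'''| / (f'')^{3/2} = |1-2s| / sqrt(s(1-s)) is unbounded as t → ±∞, which is why the abstract distinguishes the two classes.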
