Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in High Dimensions

by Hossein Taheri, et al.

Empirical Risk Minimization (ERM) algorithms are widely used for estimation and prediction tasks in signal-processing and machine-learning applications. Despite their popularity, a theory that explains their statistical properties in modern regimes, where both the number of measurements and the number of unknown parameters are large, has only recently begun to emerge. In this paper, we characterize for the first time the fundamental limits on the statistical accuracy of convex ERM for inference in high-dimensional generalized linear models. For a stylized setting with Gaussian features and problem dimensions that grow large at a proportional rate, we start with sharp performance characterizations and then derive tight lower bounds on the estimation and prediction error that hold over a wide class of loss functions and for any value of the regularization parameter. Our precise analysis has several attributes. First, it leads to a recipe for optimally tuning the loss function and the regularization parameter. Second, it allows us to precisely quantify the sub-optimality of popular heuristic choices: for instance, we show that optimally tuned least-squares is (perhaps surprisingly) approximately optimal for standard logistic data, but that the sub-optimality gap grows drastically as the signal strength increases. Third, we use the bounds to precisely assess the merits of ridge regularization as a function of the over-parameterization ratio. Notably, our bounds are expressed in terms of the Fisher information of random variables that are simple functions of the data distribution, thus making ties to corresponding bounds in classical statistics.
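The setting described above can be illustrated with a minimal numerical sketch: Gaussian features, a logistic label model, and ridge-regularized least-squares ERM in the proportional regime. This is not the paper's analysis, only a toy instance of the estimator it studies; the dimensions `p`, the aspect ratio `delta = n/p`, and the ridge parameter `lam` below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Proportional high-dimensional regime: n measurements, p parameters,
# with n/p = delta held fixed (hypothetical values for illustration).
p = 200
delta = 2.0
n = int(delta * p)

# Ground-truth signal, normalized so the per-sample signal strength is O(1).
beta_star = rng.normal(size=p)
beta_star *= np.sqrt(p) / np.linalg.norm(beta_star)

# Standard Gaussian features, scaled so logits X @ beta_star are O(1).
X = rng.normal(size=(n, p)) / np.sqrt(p)

# Binary labels from a logistic (standard logistic data) link, in {-1, +1}.
probs = 1.0 / (1.0 + np.exp(-X @ beta_star))
y = 2 * rng.binomial(1, probs) - 1

# Ridge-regularized least-squares ERM: the squared loss admits the
# closed form beta_hat = (X^T X + lam I)^{-1} X^T y.
lam = 0.1
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Direction recovery, measured by the cosine similarity with beta_star
# (least-squares on logistic data recovers the direction up to scaling).
cos_sim = (beta_hat @ beta_star) / (
    np.linalg.norm(beta_hat) * np.linalg.norm(beta_star)
)
print(f"cosine similarity with ground truth: {cos_sim:.3f}")
```

In practice one would sweep `lam` (and the loss function) and compare the resulting estimation error curves, which is precisely the tuning question the paper's lower bounds address.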


