Safe Grid Search with Optimal Complexity

10/12/2018
by Eugene Ndiaye, et al.

Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task. In this paper, we revisit the techniques for approximating the regularization path up to a predefined tolerance ϵ in a unified framework and show that its complexity is O(1/ϵ^(1/d)) for uniformly convex losses of order d > 0 and O(1/√(ϵ)) for Generalized Self-Concordant functions. This framework encompasses least squares but also logistic regression (a case that, to the best of our knowledge, was not handled as precisely by previous works). We leverage our technique to provide refined bounds on the validation error as well as a practical algorithm for hyperparameter tuning. The latter has a global convergence guarantee when targeting a prescribed accuracy on the validation set. Last but not least, our approach relieves the practitioner of the (often neglected) task of selecting a stopping criterion when optimizing over the training set: our method automatically calibrates it based on the targeted accuracy on the validation set.
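To make the grid-search setting concrete, here is a minimal generic sketch (not the paper's algorithm) of tuning a regularization parameter by evaluating an estimator along a geometrically spaced grid and keeping the value that minimizes validation error. The toy data, the closed-form ridge estimator, and the grid bounds below are illustrative assumptions.

```python
import numpy as np

# Illustrative synthetic data (an assumption, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=80)
X_tr, X_val, y_tr, y_val = X[:60], X[60:], y[:60], y[60:]

def ridge(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y.
    _, p = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def grid_search(lambdas):
    # Fit on the training split at each grid point, score on validation.
    errs = [np.mean((X_val @ ridge(X_tr, y_tr, lam) - y_val) ** 2)
            for lam in lambdas]
    best = int(np.argmin(errs))
    return lambdas[best], errs[best]

# Geometric (log-spaced) grid, the standard choice when the
# validation error varies over several orders of magnitude of lambda.
lams = np.logspace(-3, 1, 20)
best_lam, best_err = grid_search(lams)
```

The paper's contribution is, in effect, to choose the grid points adaptively so that the validation error is controlled up to a target accuracy ϵ with the smallest possible number of fits, rather than fixing the grid in advance as this sketch does.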

Related research
02/09/2015

Regularization Path of Cross-Validation Error Lower Bounds

Careful tuning of a regularization parameter is indispensable in many ma...
03/28/2017

Early Stopping without a Validation Set

Early stopping is a widely used technique to prevent poor generalization...
03/28/2017

Gradient-based Regularization Parameter Selection for Problems with Non-smooth Penalty Functions

In high-dimensional and/or non-parametric regression problems, regulariz...
10/08/2018

A Unified Dynamic Approach to Sparse Model Selection

Sparse model selection is ubiquitous from linear regression to graphical...
07/20/2022

Provably tuning the ElasticNet across instances

An important unresolved challenge in the theory of regularization is to ...
05/28/2019

LambdaOpt: Learn to Regularize Recommender Models in Finer Levels

Recommendation models mainly deal with categorical variables, such as us...
