Return of the Infinitesimal Jackknife

06/01/2018
by Ryan Giordano, et al.

The error or variability of machine learning algorithms is often assessed by repeatedly re-fitting a model with different weighted versions of the observed data. The ubiquitous tools of cross-validation (CV) and the bootstrap are examples of this technique. These methods are powerful in large part due to their model agnosticism but can be slow to run on modern, large data sets due to the need to repeatedly re-fit the model. In this work, we use a linear approximation to the dependence of the fitting procedure on the weights, producing results that can be faster than repeated re-fitting by orders of magnitude. We provide explicit finite-sample error bounds for the approximation in terms of a small number of simple, verifiable assumptions. Our results apply whether the weights and data are stochastic, deterministic, or even adversarially chosen, and so can be used as a tool for proving the accuracy of the approximation on a wide variety of problems. As a corollary, we state mild regularity conditions under which our approximation consistently estimates true leave-k-out cross-validation for any fixed k. We demonstrate the accuracy of our methods on a range of simulated and real datasets.
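To make the linear approximation concrete, the sketch below applies it to approximate leave-one-out CV for an L2-regularized logistic regression. This is a minimal JAX illustration of the idea, not the authors' released code; the model, the regularization strength lam, and all function names are assumptions made for the example. The key step is the implicit-function-theorem linearization theta(w) ~= theta_hat - H^{-1} sum_n (w_n - 1) g_n, where H is the Hessian of the weighted objective at the original weights and g_n is the gradient of observation n's loss at the original fit theta_hat.

```python
import jax
import jax.numpy as jnp

def per_point_loss(theta, x, y):
    # Negative log-likelihood of a single observation under logistic regression.
    logit = x @ theta
    return -(y * jax.nn.log_sigmoid(logit) + (1.0 - y) * jax.nn.log_sigmoid(-logit))

def weighted_objective(theta, X, Y, w, lam):
    # Weighted empirical risk; w = (1, ..., 1) recovers the original fit.
    losses = jax.vmap(per_point_loss, in_axes=(None, 0, 0))(theta, X, Y)
    return jnp.sum(w * losses) + 0.5 * lam * jnp.sum(theta ** 2)

def ij_approximation(theta_hat, X, Y, lam):
    # Linearize the optimum theta(w) around the original weights w = 1:
    #   theta(w) ~= theta_hat - H^{-1} G^T (w - 1),
    # where H is the Hessian of the weighted objective in theta at w = 1
    # and row n of G is the gradient of observation n's loss at theta_hat.
    n = X.shape[0]
    H = jax.hessian(weighted_objective)(theta_hat, X, Y, jnp.ones(n), lam)
    G = jax.vmap(jax.grad(per_point_loss), in_axes=(None, 0, 0))(theta_hat, X, Y)
    return lambda w: theta_hat - jnp.linalg.solve(H, G.T @ (w - 1.0))
```

Under these assumptions, the expensive work (fitting theta_hat once and forming H and G) is done a single time; each reweighting then costs only a linear solve. For instance, approximate leave-one-out for observation i evaluates the returned function at w = jnp.ones(n).at[i].set(0.0) instead of re-running the optimizer n times, and leave-k-out works the same way with k zeroed weights.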

Related research

07/28/2019 · A Higher-Order Swiss Army Infinitesimal Jackknife
Cross validation (CV) and the bootstrap are ubiquitous model-agnostic to...

01/03/2020 · Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data
Recently, new methods for model assessment, based on subsampling and pos...

06/23/2020 · Approximate Cross-Validation for Structured Models
Many modern data analyses benefit from explicitly modeling dependence st...

08/24/2020 · Approximate Cross-Validation with Low-Rank Data in High Dimensions
Many recent advances in machine learning are driven by a challenging tri...

12/24/2020 · Leave Zero Out: Towards a No-Cross-Validation Approach for Model Selection
As the main workhorse for model selection, Cross Validation (CV) has ach...

03/29/2015 · Cross-validation of matching correlation analysis by resampling matching weights
The strength of association between a pair of data vectors is represente...

09/17/2018 · Span error bound for weighted SVM with applications in hyperparameter selection
Weighted SVM (or fuzzy SVM) is the most widely used SVM variant owning i...
