Nearest Neighbour Based Estimates of Gradients: Sharp Nonasymptotic Bounds and Applications

by Guillaume Ausset, et al.

Motivated by a wide variety of applications, ranging from stochastic optimization to dimension reduction through variable selection, the problem of accurately estimating gradients is of crucial importance in statistics and learning theory. We consider the classic regression setup, in which a real-valued, square-integrable random variable Y is to be predicted from a (possibly high-dimensional) random vector X by means of a predictive function f(X), as accurately as possible in the mean-squared sense. We study a nearest-neighbour-based pointwise estimate of the gradient of the optimal predictive function, the regression function m(x) = 𝔼[Y | X = x]. Under classic smoothness conditions, combined with the assumption that the tails of Y − m(X) are sub-Gaussian, we prove nonasymptotic bounds that improve upon those obtained for alternative estimation methods. Beyond the novel theoretical results established, we carry out several illustrative numerical experiments. These provide strong empirical evidence that the proposed estimation method works very well for various statistical problems involving gradient estimation, namely dimensionality reduction, stochastic gradient descent optimization, and quantifying disentanglement.
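To make the idea concrete, here is a minimal sketch of a nearest-neighbour pointwise gradient estimate: at a query point x₀, fit a local linear model on the k nearest neighbours and read off the slope as an estimate of ∇m(x₀). This is an illustrative assumption about the general technique, not the exact estimator or analysis of the paper; the function name and parameters are hypothetical.

```python
import numpy as np

def knn_gradient_estimate(X, Y, x0, k=50):
    """Estimate the gradient of m(x) = E[Y | X = x] at x0 by fitting an
    affine model Y ~ a + b^T (X - x0) on the k nearest neighbours of x0.

    Illustrative sketch only; the paper's precise estimator and
    nonasymptotic analysis are not reproduced here.
    """
    # Euclidean distances from x0 to every sample point.
    d = np.linalg.norm(X - x0, axis=1)
    idx = np.argsort(d)[:k]  # indices of the k nearest neighbours

    # Local design matrix: intercept column plus centred coordinates.
    Z = np.hstack([np.ones((k, 1)), X[idx] - x0])

    # Least-squares fit; the slope coefficients approximate the gradient.
    coef, *_ = np.linalg.lstsq(Z, Y[idx], rcond=None)
    return coef[1:]
```

On a nearly linear regression function the local slope recovers the true gradient; in general the quality of the estimate trades off neighbourhood size k against the smoothness of m, which is exactly the regime the paper's bounds quantify.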


