Learning with SGD and Random Features

07/17/2018
by Luigi Carratino, et al.

Sketching and stochastic gradient methods are arguably the most common techniques to derive efficient large-scale learning algorithms. In this paper, we investigate their application in the context of nonparametric statistical learning. More precisely, we study the estimator defined by stochastic gradients with mini-batches and random features. The latter can be seen as a form of nonlinear sketching and used to define approximate kernel methods. The estimator we consider is not explicitly penalized/constrained, and regularization is implicit. Indeed, our study highlights how different parameters, such as the number of features, iterations, step-size and mini-batch size, control the learning properties of the solutions. We do this by deriving optimal finite sample bounds, under standard assumptions. The obtained results are corroborated and illustrated by numerical experiments.
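The estimator described above can be sketched in a few lines of NumPy: random Fourier features approximate a Gaussian kernel, and mini-batch SGD is run on the unpenalized least-squares objective, so that the step-size, mini-batch size and number of passes play the role of implicit regularizers. This is a minimal illustration on synthetic data, not the paper's exact algorithm or parameter choices; all names and values below are assumptions.

```python
import numpy as np

def random_fourier_features(X, W, b):
    # phi(x) = sqrt(2/M) * cos(W x + b) approximates a Gaussian kernel
    # (Rahimi-Recht random features); M is the number of features.
    M = W.shape[0]
    return np.sqrt(2.0 / M) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)

# Hypothetical synthetic regression data (not from the paper)
n, d = 500, 5
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Random feature map for a Gaussian kernel with bandwidth sigma
M, sigma = 100, 1.0
W = rng.standard_normal((M, d)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, M)
Phi = random_fourier_features(X, W, b)

# Mini-batch SGD on the unpenalized least-squares objective.
# No explicit penalty: step-size, mini-batch size and the number
# of passes control the effective regularization.
w = np.zeros(M)
step, batch, epochs = 0.5, 10, 20
for _ in range(epochs):
    perm = rng.permutation(n)
    for i in range(0, n, batch):
        idx = perm[i:i + batch]
        grad = Phi[idx].T @ (Phi[idx] @ w - y[idx]) / len(idx)
        w -= step * grad

train_mse = np.mean((Phi @ w - y) ** 2)
```

In this sketch, stopping after a fixed number of passes acts as early stopping, one of the implicit regularization mechanisms the paper analyzes.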
