Adaptivity for Regularized Kernel Methods by Lepskii's Principle

04/15/2018
by Nicole Mücke, et al.

We address the problem of adaptivity in the framework of reproducing kernel Hilbert space (RKHS) regression. More precisely, we analyze estimators arising from a linear regularization scheme g_λ. In practical applications, an important task is to choose the regularization parameter appropriately, i.e. based only on the given data and independently of unknown structural assumptions on the regression function. An attractive approach avoiding data-splitting is the Lepskii Principle (LP), also known in this setting as the Balancing Principle. We show that a modified parameter choice based on (LP) is minimax optimal adaptive, up to a log log(n) factor. A convenient result is the fact that balancing in the L^2(ν)-norm, which is easiest, automatically gives optimal balancing in all stronger norms interpolating between L^2(ν) and the RKHS. An analogous result remains open for other classical approaches to data-dependent choices of the regularization parameter, e.g. for Hold-Out.
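To illustrate the kind of parameter choice the abstract refers to, below is a minimal sketch of a balancing (Lepskii-type) rule for kernel ridge regression. It is not the paper's exact procedure: the λ-grid, the constant C, the factor 4, and the error proxy sigma(λ) = C / (√n · √λ) are illustrative assumptions standing in for the theoretical stochastic-error bound.

```python
# Hedged sketch of the balancing / Lepskii principle for kernel ridge regression.
# All constants and the grid below are illustrative assumptions, not taken from the paper.
import numpy as np

def gaussian_kernel(X, Y, width=1.0):
    # Squared distances, then Gaussian kernel matrix.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * width**2))

def krr_fit_predict(K, y, lam):
    # Kernel ridge regression: f_lam(x_i) = (K (K + n*lam*I)^{-1} y)_i
    n = K.shape[0]
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return K @ alpha

def balancing_choice(K, y, lambdas, C=1.0):
    """Pick lambda by balancing in the empirical L^2 norm.

    lambdas must be sorted in decreasing order.  sigma(lam) is a surrogate
    for the stochastic error bound; its shape C / (sqrt(n) * sqrt(lam)) is
    typical, but the constant C is an assumption.
    """
    n = len(y)
    preds = {lam: krr_fit_predict(K, y, lam) for lam in lambdas}
    sigma = {lam: C / (np.sqrt(n) * np.sqrt(lam)) for lam in lambdas}

    def emp_norm(v):
        return np.sqrt(np.mean(v**2))

    # Largest lambda whose estimator stays within 4*sigma(mu) of every
    # estimator with a smaller regularization parameter mu.
    for i, lam in enumerate(lambdas):
        if all(emp_norm(preds[lam] - preds[mu]) <= 4 * sigma[mu]
               for mu in lambdas[i + 1:]):
            return lam
    return lambdas[-1]

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
K = gaussian_kernel(X, X)
grid = [2.0 ** (-k) for k in range(1, 12)]  # decreasing lambda grid
print("balanced lambda:", balancing_choice(K, y, grid))
```

Note that the comparison is made in the empirical L^2 norm only; the abstract's point is that, under the paper's assumptions, balancing in this easiest norm already yields optimal balancing in the stronger interpolation norms.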


