Convergence analysis of online algorithms for vector-valued kernel regression

09/14/2023
by Michael Griebel, et al.

We consider the problem of approximating the regression function from noisy vector-valued data by an online learning algorithm that uses an appropriate reproducing kernel Hilbert space (RKHS) as a prior. In an online algorithm, i.i.d. samples become available one by one via a random process and are processed successively to build approximations to the regression function. We are interested in the asymptotic performance of such online approximation algorithms and show that the expected squared error in the RKHS norm can be bounded by C^2 (m+1)^{-s/(2+s)}, where m is the current number of processed data points, the parameter 0 < s ≤ 1 expresses an additional smoothness assumption on the regression function, and the constant C depends on the variance of the input noise, the smoothness of the regression function, and further parameters of the algorithm.
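A minimal sketch of the kind of scheme the abstract describes is online (stochastic-gradient) regression in an RKHS: each incoming sample (x_m, y_m) triggers one update f_{m+1} = f_m - eta_m (f_m(x_m) - y_m) k(x_m, ·), so the estimate is a growing kernel expansion. The Gaussian kernel, the step-size schedule eta_m = eta0 (m+1)^{-theta}, and all names below are illustrative assumptions, not the specific algorithm or constants analyzed in the paper.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Scalar Gaussian kernel; for vector-valued outputs we use k(x, z) * Identity."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

class OnlineKernelRegression:
    """Online stochastic-gradient regression in an RKHS (illustrative sketch).

    After m updates the estimate is f_m(x) = sum_j a_j k(x_j, x), where each
    update appends one center:  a_m = -eta_m (f_m(x_m) - y_m),
    with decaying step size eta_m = eta0 * (m + 1) ** (-theta).
    """
    def __init__(self, eta0=0.8, theta=0.5, gamma=1.0):
        self.eta0, self.theta, self.gamma = eta0, theta, gamma
        self.centers, self.coeffs = [], []
        self.m = 0

    def predict(self, x, out_dim):
        # Evaluate the current kernel expansion at x (zero function before any data).
        f = np.zeros(out_dim)
        for z, a in zip(self.centers, self.coeffs):
            f += a * rbf_kernel(x, z, self.gamma)
        return f

    def update(self, x, y):
        y = np.atleast_1d(np.asarray(y, dtype=float))
        residual = self.predict(x, y.size) - y          # f_m(x_m) - y_m
        eta = self.eta0 * (self.m + 1) ** (-self.theta)  # decaying step size
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(-eta * residual)              # append new expansion term
        self.m += 1

# Demo: learn a smooth R -> R^2 target from noisy i.i.d. samples, one at a time.
rng = np.random.default_rng(0)
model = OnlineKernelRegression(gamma=2.0)
errs = []
for _ in range(400):
    x = rng.uniform(-1.0, 1.0, size=1)
    y = np.array([np.sin(np.pi * x[0]), np.cos(np.pi * x[0])]) \
        + 0.05 * rng.standard_normal(2)
    errs.append(np.sum((model.predict(x, 2) - y) ** 2))  # error before this update
    model.update(x, y)
print("early avg squared error:", np.mean(errs[:50]))
print("late  avg squared error:", np.mean(errs[-50:]))
```

The per-sample squared prediction error decays as more data is processed, which is the empirical counterpart of the C^2 (m+1)^{-s/(2+s)} bound in the abstract; note that this plain sketch stores one kernel center per sample, so practical variants typically add sparsification or regularization.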

Related research:
- Optimality of Robust Online Learning (04/20/2023)
- Deterministic error bounds for kernel-based learning techniques under bounded noise (08/10/2020)
- Asymptotic Bounds for Smoothness Parameter Estimates in Gaussian Process Interpolation (03/10/2022)
- An Online Projection Estimator for Nonparametric Regression in Reproducing Kernel Hilbert Spaces (04/01/2021)
- An Optimal Reduction of TV-Denoising to Adaptive Online Learning (01/23/2021)
- Unsupervised classification of children's bodies using currents (06/06/2016)
- Best L_p Isotonic Regressions, p ∈ {0, 1, ∞} (06/01/2023)
