Efficient online learning with kernels for adversarial large scale problems

by Rémi Jézéquel, et al.

We are interested in a framework of online learning with kernels for low-dimensional but large-scale and potentially adversarial datasets. Considering the Gaussian kernel, we study the computational and theoretical performance of online variants of kernel ridge regression. The resulting algorithm is based on approximations of the Gaussian kernel through Taylor expansion. For d-dimensional inputs, it achieves a (close to) optimal regret of order O((log n)^(d+1)) with per-round time complexity and space complexity O((log n)^(2d)). This makes the algorithm a suitable choice as soon as n ≫ e^d, which is likely to happen with low-dimensional, large-scale datasets.
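The key idea is that the Gaussian kernel factorizes as k(x, y) = exp(-||x||²/2σ²) · exp(-||y||²/2σ²) · exp(⟨x, y⟩/σ²), and truncating the Taylor series of the last factor yields an explicit finite-dimensional feature map whose inner product approximates the kernel. The sketch below illustrates this construction with NumPy; the degree and bandwidth values are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

def taylor_features(X, degree=3, sigma=1.0):
    """Explicit feature map phi such that phi(x) . phi(y) approximates the
    Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)), via a degree-`degree`
    Taylor truncation of exp(<x, y> / sigma^2).  Illustrative sketch only."""
    X = np.atleast_2d(X) / sigma
    n, d = X.shape
    feats = []
    for k in range(degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            # monomial x_{i1} * ... * x_{ik} for this multi-index
            mono = np.ones(n)
            for i in idx:
                mono = mono * X[:, i]
            # weight 1/sqrt(alpha!), where alpha counts repeated coordinates;
            # this absorbs the 1/k! and multinomial coefficients of the series
            counts = np.bincount(np.array(idx, dtype=int), minlength=d)
            w = 1.0 / np.sqrt(np.prod([factorial(c) for c in counts]))
            feats.append(w * mono)
    phi = np.array(feats).T  # shape (n, number of monomials up to `degree`)
    # radial prefactor exp(-||x||^2 / (2 sigma^2))
    return phi * np.exp(-0.5 * np.sum(X**2, axis=1, keepdims=True))
```

With this map, kernel ridge regression reduces to ordinary ridge regression on phi(X), whose per-round cost depends only on the number of monomials, O((log n)^d) of them when the degree grows logarithmically with n, rather than on the number of past observations.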
