
Statistical and Computational Trade-Offs in Kernel K-Means

by Daniele Calandriello et al.

We investigate the efficiency of k-means in terms of both statistical and computational requirements. More precisely, we study a Nyström approach to kernel k-means. We analyze the statistical properties of the proposed method and show that it achieves the same accuracy as exact kernel k-means at only a fraction of the computational cost. Indeed, we prove under basic assumptions that sampling √(n) Nyström landmarks greatly reduces computational costs without incurring any loss of accuracy. To the best of our knowledge, this is the first result of this kind for unsupervised learning.
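The abstract's recipe, sketched under stated assumptions: sample m = ⌈√n⌉ landmarks uniformly at random, build the Nyström feature embedding Φ = K_nm K_mm^{-1/2}, and run ordinary Lloyd's k-means on the embedded points. The sketch below is a minimal NumPy illustration, not the authors' implementation; the RBF kernel, uniform landmark sampling, and the `nystroem_kernel_kmeans` function name are all assumptions for illustration (the paper's analysis also covers leverage-score sampling).

```python
import numpy as np

def nystroem_kernel_kmeans(X, k, gamma=1.0, n_iter=50, seed=0):
    """Approximate RBF kernel k-means via Nystrom landmarks (illustrative sketch).

    Samples m = ceil(sqrt(n)) landmarks uniformly, builds the m-dimensional
    Nystrom embedding Phi = K_nm @ K_mm^{-1/2}, then runs Lloyd's k-means
    on Phi instead of on the full n x n kernel matrix.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = int(np.ceil(np.sqrt(n)))          # sqrt(n) landmarks, as in the paper
    landmarks = X[rng.choice(n, size=m, replace=False)]

    def rbf(A, B):
        # Gaussian kernel k(a, b) = exp(-gamma * ||a - b||^2)
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    K_mm = rbf(landmarks, landmarks)
    K_nm = rbf(X, landmarks)

    # K_mm^{-1/2} via eigendecomposition (clip tiny eigenvalues for stability)
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)
    Phi = K_nm @ (V / np.sqrt(w)) @ V.T   # n x m embedding

    # Plain Lloyd's algorithm on the embedded points
    centers = Phi[rng.choice(n, size=k, replace=False)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        dists = ((Phi[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Phi[labels == j].mean(0)
    return labels, Phi
```

The cost intuition: exact kernel k-means touches the full n × n kernel matrix (O(n²) memory and per-iteration work), whereas with m = √n landmarks the embedding costs O(nm) = O(n^{3/2}) and each Lloyd iteration runs on n points in m dimensions.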


Nearly Optimal Risk Bounds for Kernel K-Means

In this paper, we study the statistical properties of the kernel k-means...

Gain with no Pain: Efficient Kernel-PCA by Nyström Sampling

In this paper, we propose and study a Nyström based approach to efficien...

Kernel Ridge Regression Using Importance Sampling with Application to Seismic Response Prediction

Scalable kernel methods, including kernel ridge regression, often rely o...

Kernel k-Means, By All Means: Algorithms and Strong Consistency

Kernel k-means clustering is a powerful tool for unsupervised learning o...

On Kernel Derivative Approximation with Random Fourier Features

Random Fourier features (RFF) represent one of the most popular and wide...

Generalization Properties of Learning with Random Features

We study the generalization properties of ridge regression with random f...

LCS Graph Kernel Based on Wasserstein Distance in Longest Common Subsequence Metric Space

For graph classification tasks, many methods use a common strategy to ag...