On Convergence of Nearest Neighbor Classifiers over Feature Transformations

by Luka Rimanic, et al.

The k-Nearest Neighbors (kNN) classifier is a fundamental non-parametric machine learning algorithm. However, it is well known to suffer from the curse of dimensionality, which is why in practice one often applies a kNN classifier on top of a (pre-trained) feature transformation. From a theoretical perspective, most, if not all, results aimed at understanding the kNN classifier are derived for the raw feature space. This leads to an emerging gap between our theoretical understanding of kNN and its practical applications. In this paper, we take a first step towards bridging this gap. We provide a novel analysis of the convergence rates of a kNN classifier over transformed features. This analysis requires an in-depth understanding of the properties that connect the transformed space and the raw feature space. More precisely, we build our convergence bound upon two key properties of the transformed space: (1) safety – how well the raw posterior can be recovered from the transformed space, and (2) smoothness – how complex this recovery function is. Based on our result, we are able to explain why some (pre-trained) feature transformations are better suited for a kNN classifier than others. We empirically validate that both properties affect kNN convergence using 30 feature transformations on 6 benchmark datasets spanning the vision and text domains.
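As a rough illustration of the setting studied in the abstract (not the paper's actual method or bound), the sketch below applies a kNN classifier in a transformed feature space. The `transform` function is a hypothetical stand-in for a pre-trained feature extractor; in practice this would be, e.g., embeddings from a vision or text model:

```python
import math
from collections import Counter

def transform(x):
    # Hypothetical stand-in for a pre-trained feature transformation.
    # A real pipeline would use embeddings from a pre-trained model.
    return [math.tanh(v) for v in x]

def knn_predict(train_X, train_y, query, k=3):
    # Classify `query` by majority vote among its k nearest training
    # points, with distances measured in the *transformed* space.
    tq = transform(query)
    dists = sorted(
        (math.dist(tq, transform(x)), y)
        for x, y in zip(train_X, train_y)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated classes in the raw space.
train_X = [[0.0, 0.1], [0.2, 0.0], [2.0, 1.9], [1.8, 2.1]]
train_y = [0, 0, 1, 1]
print(knn_predict(train_X, train_y, [0.1, 0.0], k=3))  # -> 0
```

In the paper's terms, the quality of such a pipeline depends on how much class information `transform` preserves (safety) and how well-behaved the map back to the raw posterior is (smoothness).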


