Efficient SVDD Sampling with Approximation Guarantees for the Decision
Boundary
Support Vector Data Description (SVDD) is a popular one-class classifier for
anomaly and novelty detection. But despite its effectiveness, SVDD does not
scale well with data size. To avoid prohibitive training times, sampling
methods select small subsets of the training data on which SVDD trains a
decision boundary that is, ideally, equivalent to the one obtained on the full data
set. According to the literature, a good sample should therefore contain
so-called boundary observations that SVDD would select as support vectors on
the full data set. However, non-boundary observations are also essential: omitting
them fragments contiguous inlier regions and degrades classification accuracy.
Other aspects, such as selecting a sufficiently representative sample, are
important as well. But existing sampling methods largely overlook them,
resulting in poor classification accuracy. In this article, we study how to
select a sample that accounts for all of these aspects. Our approach is to frame SVDD
sampling as an optimization problem, where constraints guarantee that sampling
indeed approximates the original decision boundary. We then propose RAPID, an
efficient algorithm to solve this optimization problem. RAPID does not require
any parameter tuning, is easy to implement, and scales well to large data
sets. We evaluate our approach on real-world and synthetic data. Our evaluation
is the most comprehensive one of SVDD sampling to date. Our results show that
RAPID outperforms its competitors in classification accuracy, in sample size,
and in runtime.
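To make the setting concrete, below is a minimal sketch of the sample-then-train workflow the abstract describes. It is not the RAPID algorithm itself: since SVDD with an RBF kernel is equivalent to the one-class SVM, scikit-learn's OneClassSVM stands in for an SVDD solver, a uniform random sample is a naive placeholder for a principled sampler, and all parameter values (sample_size, nu, gamma) are illustrative assumptions.

```python
# Sketch: train SVDD (via its one-class SVM equivalent) on a small
# sample and measure how well the resulting decision boundary
# approximates the one trained on the full data set.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2))  # synthetic stand-in for the training data

# Reference model: the decision boundary obtained on the full data set.
full_model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X)

# Sample-based model: train only on a small subset. A uniform draw is
# the naive baseline; a constraint-based sampler like RAPID would
# replace this step while the rest of the pipeline stays unchanged.
sample_size = 500  # hypothetical budget
idx = rng.choice(len(X), size=sample_size, replace=False)
sample_model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X[idx])

# Agreement between the two boundaries, evaluated on the full data:
# the quantity a good SVDD sampling method tries to keep close to 1.
agreement = np.mean(full_model.predict(X) == sample_model.predict(X))
print(f"label agreement with full-data model: {agreement:.3f}")
```

Under this framing, the paper's contribution is the sampling step: choosing the subset so that the agreement above is guaranteed to stay high while the sample stays small.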