Fast Convergence on Perfect Classification for Functional Data
In this study, we investigate the possibility of achieving perfect classification of functional data with finite samples. The seminal work of Delaigle and Hall (2012) showed that a perfect classifier is easier to obtain for functional data than for finite-dimensional data. This result rests on their finding that a sufficient condition for the existence of a perfect classifier, known as the Delaigle–Hall (DH) condition, can hold only for functional data. However, even when the DH condition holds, a large sample size may be required to approach perfect classification, because misclassification errors for functional data can converge very slowly; specifically, the minimax rate of convergence is of logarithmic order in the sample size. This study resolves this complication by proving that the DH condition also yields fast convergence of the misclassification error in the sample size. To this end, we study a classifier based on empirical risk minimization over a reproducing kernel Hilbert space (RKHS) and analyse its convergence rate under the DH condition. Our result shows that the misclassification error of the RKHS classifier converges at an exponential rate in the sample size. Technically, the proof rests on two points: (i) connecting the DH condition to a margin condition for classifiers, and (ii) handling the metric entropy of functional data. Experimentally, we validate that the DH condition and the associated margin condition have a substantial impact on the convergence rate of the RKHS classifier, and we find that several other classifiers for functional data exhibit a similar property.
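For context, the following is a minimal sketch of the DH condition as it is commonly stated in the functional-data classification literature; the notation (mean difference, covariance eigenpairs) is assumed here for illustration and is not quoted from this abstract.

% Sketch of the Delaigle--Hall (DH) condition (standard formulation;
% notation assumed, not taken from this abstract).
% Let X be a random function with class label Y in {0,1},
% mu = E[X | Y=1] - E[X | Y=0] the mean difference between classes, and
% (theta_j, phi_j)_{j >= 1} the eigenpairs of the common covariance
% operator, with projection scores mu_j = <mu, phi_j>.
% The DH condition requires the Mahalanobis-type class separation to diverge:
\[
  \sum_{j=1}^{\infty} \frac{\mu_j^{2}}{\theta_j} \;=\; \infty .
\]
% Under this condition, perfect classification is asymptotically achievable
% (Delaigle and Hall, 2012). The abstract's claim is that the same condition
% further yields a misclassification error decaying exponentially in the
% sample size n, i.e. of order exp(-Cn) for some constant C > 0, in contrast
% to the logarithmic minimax order that holds in general.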