Comparison theorems on large-margin learning
This paper studies the binary classification problem associated with a family of loss functions called large-margin unified machines (LUMs), which offer a natural bridge between distribution-based likelihood approaches and margin-based approaches. LUMs can also overcome the so-called data-piling issue of the support vector machine in the high-dimension, low-sample-size setting. In this paper we establish new comparison theorems for all LUM loss functions, which play a key role in the further error analysis of large-margin learning algorithms.
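For context (this parameterization is the one commonly used for LUMs in the literature, not reproduced from this abstract), the LUM family is typically indexed by constants a > 0 and c ≥ 0 and applied to the functional margin u = y f(x); small c gives a smooth, likelihood-type loss while large c approaches the hinge loss:

```latex
V(u) =
\begin{cases}
1 - u, & u < \dfrac{c}{1+c},\\[1.5ex]
\dfrac{1}{1+c}\left(\dfrac{a}{(1+c)\,u - c + a}\right)^{a}, & u \ge \dfrac{c}{1+c}.
\end{cases}
```

Comparison theorems of the kind studied here bound the excess misclassification risk in terms of the excess V-risk, so that convergence rates proved for the surrogate loss transfer to the 0-1 loss.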