On the consistency of Multithreshold Entropy Linear Classifier

The Multithreshold Entropy Linear Classifier (MELC) is a recently proposed classifier that employs an information-theoretic criterion to build a multithreshold maximum-margin model. In this paper we analyze its consistency over the class of multithreshold linear models and show that its objective function upper bounds the number of misclassified points, in much the same way the hinge loss does in support vector machines. We further support the analysis with numerical experiments on five datasets.
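
To make the analogy with support vector machines concrete, the short sketch below (ours, not taken from the paper) checks numerically that the hinge loss upper bounds the 0-1 loss pointwise, which is the sense in which the MELC objective is claimed to bound the number of misclassified points. The margin values and function names are illustrative assumptions; the actual MELC objective is not reproduced here.

```python
import numpy as np

def zero_one_loss(margin):
    # 1 if the signed margin y * f(x) is non-positive (misclassified), else 0
    return (margin <= 0).astype(float)

def hinge_loss(margin):
    # SVM hinge loss: max(0, 1 - y * f(x))
    return np.maximum(0.0, 1.0 - margin)

# Hypothetical signed margins y * f(x) for a small sample
margins = np.linspace(-2.0, 2.0, 9)

# Pointwise domination: hinge loss >= 0-1 loss for every margin value
assert np.all(hinge_loss(margins) >= zero_one_loss(margins))

# Summed over the sample, the hinge risk bounds the misclassification count
print(zero_one_loss(margins).sum(), "<=", hinge_loss(margins).sum())
```

The same pointwise-domination argument is what makes such surrogate objectives useful: minimizing the surrogate drives down an upper bound on the number of errors.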
