Bidirectional Loss Function for Label Enhancement and Distribution Learning

by Xinyuan Liu et al.

Label distribution learning (LDL) is an interpretable and general learning paradigm that has been applied in many real-world applications. In contrast to the simple logical vector in single-label learning (SLL) and multi-label learning (MLL), LDL assigns each instance a description degree for every label. In practice, two challenges exist in LDL: how to address the dimensional gap problem during the learning process, and how to accurately recover label distributions from existing logical labels, i.e., label enhancement (LE). Most existing LDL and LE algorithms ignore the fact that the dimension of the input matrix is much higher than that of the output one, and the unidirectional projection they rely on typically causes a loss of dimensionality: valuable information hidden in the feature space is discarded during the mapping process. To this end, this study proposes a bidirectional projection loss function that can be applied to both LE and LDL problems simultaneously. More specifically, this novel loss function accounts not only for the mapping errors generated by projecting the input space into the output space, but also for the reconstruction errors generated by projecting the output space back to the input space. Because the loss encourages the input data to be reconstructable from the output data, it is expected to yield more accurate results. Finally, experiments on several real-world datasets demonstrate the superiority of the proposed method for both LE and LDL.
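The core idea can be sketched as a two-term objective: a forward mapping error between projected features and label distributions, plus a backward reconstruction error between back-projected distributions and the original features. The sketch below is a minimal illustration of that structure, not the paper's actual formulation; the names (`X`, `D`, `W`, `M`, `lam`) and the plain least-squares fitting are assumptions for demonstration.

```python
import numpy as np

def bidirectional_loss(X, D, W, M, lam=1.0):
    """Illustrative bidirectional projection loss (not the paper's exact objective).

    X: (n, d) feature matrix; D: (n, c) label distributions (rows sum to 1);
    W: (d, c) forward projection; M: (c, d) backward projection;
    lam: weight on the reconstruction term.
    """
    forward = np.linalg.norm(X @ W - D, 'fro') ** 2   # mapping error: features -> distributions
    backward = np.linalg.norm(D @ M - X, 'fro') ** 2  # reconstruction error: distributions -> features
    return forward + lam * backward

# Toy example with synthetic data: fit both projections by least squares.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
D = np.abs(rng.standard_normal((50, 4)))
D /= D.sum(axis=1, keepdims=True)          # normalize rows into label distributions
W = np.linalg.lstsq(X, D, rcond=None)[0]   # forward projection, feature space -> label space
M = np.linalg.lstsq(D, X, rcond=None)[0]   # backward projection, label space -> feature space
loss = bidirectional_loss(X, D, W, M)
```

In a real method the two projections would be optimized jointly against the combined loss (often with the backward matrix tied to the forward one), rather than fitted independently as in this toy example.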


