Deep Semi-supervised Learning with Double-Contrast of Features and Semantics

11/28/2022
by   Quan Feng, et al.

In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, largely owing to the large amount of available annotated data. However, obtaining such annotations is expensive in practice. A more realistic strategy is therefore semi-supervised learning (SSL), which leverages a small amount of labeled data together with a large amount of unlabeled data. Typically, semantic consistency regularization and two-stage methods that decouple feature extraction from classification have proven effective. Nevertheless, representation learning restricted to semantic consistency regularization cannot guarantee that representations of samples with different semantics are separable or discriminative, and because of the inherent limitations of two-stage learning, the extracted features may not match specific downstream tasks. To address these drawbacks, this paper proposes an end-to-end deep semi-supervised learning method with a double contrast of semantics and features, which extracts effective task-specific discriminative features by contrasting the semantics/features of positive and negative augmented sample pairs. Moreover, we leverage information theory to explain the rationality of the double contrast of semantics and features, and we relax mutual information to a contrastive loss in a simpler form. Finally, the effectiveness of our method is verified on benchmark datasets.
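The abstract's relaxation of mutual information to a contrastive loss over positive and negative augmented pairs is commonly realized as an InfoNCE-style objective. Below is a minimal NumPy sketch of that general idea, not the authors' exact objective; the `temperature` value, the `l2_normalize` helper, and the toy embeddings are illustrative assumptions:

```python
import numpy as np

def l2_normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for a single anchor embedding.

    anchor, positive: 1-D L2-normalized embeddings of two augmented views
    of the same sample; negatives: 2-D array of embeddings of other samples.
    The loss is low when the anchor is closer to its positive than to any negative.
    """
    pos_sim = anchor @ positive / temperature          # similarity to the positive view
    neg_sims = negatives @ anchor / temperature        # similarities to negatives
    logits = np.concatenate(([pos_sim], neg_sims))
    logits = logits - logits.max()                     # numerically stable log-softmax
    # -log softmax probability assigned to the positive pair
    return -logits[0] + np.log(np.exp(logits).sum())

# Toy usage: a positive that is a slightly perturbed view of the anchor.
rng = np.random.default_rng(0)
anchor = l2_normalize(rng.normal(size=8))
positive = l2_normalize(anchor + 0.05 * rng.normal(size=8))
negatives = l2_normalize(rng.normal(size=(16, 8)))
loss = info_nce_loss(anchor, positive, negatives)
```

Minimizing this loss is a tractable surrogate for maximizing a lower bound on the mutual information between the two augmented views, which is the information-theoretic justification the abstract alludes to.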


Related Research

Contrastive Regularization for Semi-Supervised Learning (01/17/2022)
Consistency regularization on label predictions becomes a fundamental te...

ActiveMatch: End-to-end Semi-supervised Active Representation Learning (10/06/2021)
Semi-supervised learning (SSL) is an efficient framework that can train ...

Semantic-aware Representation Learning Via Probability Contrastive Loss (11/11/2021)
Recent feature contrastive learning (FCL) has shown promising performanc...

Self-supervised driven consistency training for annotation efficient histopathology image analysis (02/07/2021)
Training a neural network with a large labeled dataset is still a domina...

ReRankMatch: Semi-Supervised Learning with Semantics-Oriented Similarity Representation (02/12/2021)
This paper proposes integrating semantics-oriented similarity representa...

CO2: Consistent Contrast for Unsupervised Visual Representation Learning (10/05/2020)
Contrastive learning has been adopted as a core method for unsupervised ...

Semixup: In- and Out-of-Manifold Regularization for Deep Semi-Supervised Knee Osteoarthritis Severity Grading from Plain Radiographs (03/04/2020)
Knee osteoarthritis (OA) is one of the highest disability factors in the...
