A Knowledge-based Learning Framework for Self-supervised Pre-training Towards Enhanced Recognition of Medical Images

11/27/2022
by Wei Chen, et al.

Self-supervised pre-training has become the preferred choice for building reliable models for automated recognition of massive medical images, which are routinely unannotated, lack semantic labels, and carry no guarantee of quality. However, this paradigm is still in its infancy and limited by two closely related open issues: 1) how to learn robust representations in an unsupervised manner from unlabelled medical images with low sample diversity, and 2) how to obtain the most significant representations demanded by high-quality segmentation. Addressing these issues, this study proposes a knowledge-based learning framework for enhanced recognition of medical images that works in three phases by synergizing contrastive and generative learning models: 1) Sample Space Diversification: reconstructive proxy tasks embed a priori knowledge with highlighted context to diversify the expanded sample space; 2) Enhanced Representation Learning: an informative noise-contrastive estimation (InfoNCE) loss regularizes the encoder to enhance representation learning of annotation-free images; 3) Correlated Optimization: pre-training of the encoder and the decoder is correlated via image restoration from the proxy tasks, targeting the needs of semantic segmentation. Extensive experiments on public medical image datasets (e.g., CheXpert and DRIVE) against state-of-the-art counterparts (e.g., SimCLR and MoCo) demonstrate that the proposed framework statistically excels in self-supervised benchmarks, achieving improvements of 2.08, 1.23, 1.12, 0.76, and 1.38 percentage points over SimCLR in AUC/Dice, and that it enables label-efficient semi-supervised learning, e.g., reducing the annotation cost by up to 99%.
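
For concreteness, the sketch below shows one way a hybrid objective of this kind can be assembled: a reconstructive proxy task produces corrupted views, an InfoNCE term regularizes the encoder, and a restoration term couples the optimization of encoder and decoder. This is a minimal PyTorch sketch of the general technique, not the authors' implementation; encoder, decoder, proj_head, corrupt, and alpha are hypothetical placeholders for the paper's actual components and loss weighting.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        # Standard InfoNCE/NT-Xent-style loss between two batches of
        # embeddings; z1 and z2 are (N, D) projections of two views.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature            # (N, N) similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)       # match i-th view pairs

    def pretraining_step(encoder, decoder, proj_head, corrupt, images, alpha=1.0):
        # Hypothetical single pre-training step combining the contrastive and
        # generative objectives; `corrupt` stands in for a reconstructive
        # proxy task (e.g., masking or distortion) applied to each image.
        v1, v2 = corrupt(images), corrupt(images)     # diversified views
        h1, h2 = encoder(v1), encoder(v2)
        l_con = info_nce_loss(proj_head(h1), proj_head(h2))
        # Restoring the original image correlates encoder and decoder
        # pre-training, matching the decoder later used for segmentation.
        l_rec = F.mse_loss(decoder(h1), images) + F.mse_loss(decoder(h2), images)
        return l_con + alpha * l_rec

The relative weight alpha between the contrastive and restoration terms is an assumed hyperparameter here; the paper's actual proxy tasks and loss balancing may differ.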


