Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound

by Zeyu Fu, et al.

Self-supervised contrastive representation learning offers the advantage of learning meaningful visual representations from unlabeled medical datasets for transfer learning. However, applying current contrastive learning approaches to medical data without considering its domain-specific anatomical characteristics may lead to visual representations that are inconsistent in appearance and semantics. In this paper, we propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL), which incorporates anatomy information to augment positive/negative pair sampling in a contrastive learning manner. The proposed approach is demonstrated on automated fetal ultrasound imaging tasks, enabling anatomically similar positive pairs, drawn from the same or different ultrasound scans, to be pulled together, thus improving the learned representations. We empirically investigate the effect of including anatomy information at coarse and fine granularity for contrastive learning, and find that learning with fine-grained anatomy information, which preserves intra-class differences, is more effective than its coarse-grained counterpart. We also analyze the impact of the anatomy ratio on our AWCL framework and find that using more distinct but anatomically similar samples to compose positive pairs results in better-quality representations. Experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective for learning representations that transfer well to three clinical downstream tasks, and achieves superior performance compared to ImageNet-supervised pre-training and current state-of-the-art contrastive learning methods. In particular, AWCL outperforms the ImageNet-supervised method by 13.8% and the state-of-the-art contrastive-based method by 7.1% on the segmentation task.
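The core idea of treating anatomically similar images, whether from the same scan or different scans, as positive pairs can be sketched as a supervised-contrastive-style objective conditioned on anatomy labels. The function below is a minimal NumPy illustration of that sampling principle, not the authors' implementation; the function name, signature, and temperature value are assumptions for this sketch.

```python
import numpy as np

def awcl_loss(embeddings, anatomy_labels, temperature=0.1):
    """Sketch of an anatomy-aware contrastive loss.

    Images sharing an anatomy label -- whether or not they come from the
    same scan -- are treated as positives (SupCon-style); all other images
    in the batch act as negatives. This is an illustrative approximation
    of AWCL's positive/negative pair sampling, not the paper's exact loss.
    """
    z = np.asarray(embeddings, dtype=np.float64)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise embeddings
    labels = np.asarray(anatomy_labels)
    n = z.shape[0]

    sim = z @ z.T / temperature                        # pairwise cosine similarities
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)            # exclude self-pairs

    # Log-softmax over each row: log p(j | i) among all other batch items.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    loss, count = 0.0, 0
    for i in range(n):
        positives = (labels == labels[i]) & ~self_mask[i]  # anatomy-similar samples
        if positives.any():
            loss += -log_prob[i, positives].mean()
            count += 1
    return loss / max(count, 1)
```

When positives are well aligned in embedding space the loss is near zero, and it grows as anatomically similar samples drift apart, which is the behaviour the pulling-together described in the abstract relies on.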

