SAM^Med: A medical image annotation framework based on large vision model

07/11/2023
by   Chenglong Wang, et al.

Recently, the large vision model Segment Anything Model (SAM) has revolutionized the computer vision field, especially image segmentation. SAM introduced a new promptable segmentation paradigm that exhibits remarkable zero-shot generalization ability, and extensive research has explored its potential and limits in various downstream tasks. In this study, we present SAM^Med, an enhanced framework for medical image annotation that leverages the capabilities of SAM. The SAM^Med framework consists of two submodules, SAM^assist and SAM^auto. SAM^assist demonstrates the generalization ability of SAM to downstream medical segmentation tasks using a prompt-learning approach; results show a significant improvement in segmentation accuracy with only approximately five input points. SAM^auto aims to accelerate the annotation process by automatically generating input prompts. The proposed SAP-Net model achieves superior segmentation performance with only five annotated slices, reaching average Dice coefficients of 0.80 and 0.82 for kidney and liver segmentation, respectively. Overall, SAM^Med demonstrates promising results in medical image annotation. These findings highlight the potential of leveraging large-scale vision models in medical image annotation tasks.
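The abstract does not include implementation details, but the point-prompt workflow that SAM^assist builds on follows the standard SAM interface from Meta's segment-anything package. Below is a minimal sketch of segmenting a structure from a handful of foreground/background clicks; the checkpoint path, image data, and click coordinates are placeholders, and the paper's prompt-learning adapter and SAP-Net prompt generator are not reproduced here.

```python
# Minimal sketch of SAM point-prompt segmentation (the generic interface
# SAM^assist builds on). Checkpoint path, image, and click coordinates are
# placeholders; the paper's prompt-learning and SAP-Net are NOT shown here.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (ViT-H weights from the official repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# A medical slice converted to 3-channel uint8 RGB (random placeholder data).
image = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# Roughly five user clicks: label 1 = foreground (organ), 0 = background.
point_coords = np.array([[256, 256], [240, 270], [270, 240], [300, 300], [100, 100]])
point_labels = np.array([1, 1, 1, 1, 0])

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=False,  # single best mask for an unambiguous organ prompt
)
print(masks.shape, scores)  # (1, 512, 512) boolean mask and its quality score
```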
