MedCLIP-SAMv2: Towards universal text-driven medical image segmentation

Authors: Koleilat T, Asgariandehkordi H, Rivaz H, Xiao Y


Affiliations

1 Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada. Electronic address: taha.koleilat@mail.concordia.ca.
2 Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada.
3 Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec, Canada.

Description

Segmentation of anatomical structures and pathologies in medical images is essential for modern disease diagnosis, clinical research, and treatment planning. While significant advancements have been made in deep learning-based segmentation techniques, many of these methods still suffer from limitations in data efficiency, generalizability, and interactivity. As a result, developing robust segmentation methods that require fewer labeled datasets remains a critical challenge in medical image analysis. Recently, the introduction of foundation models like CLIP and the Segment Anything Model (SAM), with robust cross-domain representations, has paved the way for interactive and universal image segmentation. However, further exploration of these models for data-efficient segmentation in medical imaging is an active field of research. In this paper, we introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans using text prompts, in both zero-shot and weakly supervised settings. Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss and leveraging the Multi-modal Information Bottleneck (M2IB) to create visual prompts for generating segmentation masks with SAM in the zero-shot setting. We also investigate using zero-shot segmentation labels in a weakly supervised paradigm to further enhance segmentation quality. Extensive validation across four diverse segmentation tasks and medical imaging modalities (breast tumor ultrasound, brain tumor MRI, lung X-ray, and lung CT) demonstrates the high accuracy of our proposed framework. Our code is available at https://github.com/HealthX-Lab/MedCLIP-SAMv2.
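The DHN-NCE loss and the saliency-to-prompt step are only named in the abstract above; the two sketches below illustrate plausible forms of each, under stated assumptions, and are not the paper's exact implementations.

First, a minimal PyTorch sketch of a decoupled contrastive loss with hard-negative weighting, in the spirit of Decoupled Hard Negative Noise Contrastive Estimation: positive pairs are removed from the denominator (decoupling), and negatives closer to the anchor receive larger weight. The names dhn_nce, tau, and beta are illustrative assumptions.

import torch
import torch.nn.functional as F

def dhn_nce(image_emb, text_emb, tau=0.07, beta=0.5):
    # Assumed form of a decoupled, hard-negative-weighted contrastive loss;
    # see the authors' repository for the exact DHN-NCE definition.
    img = F.normalize(image_emb, dim=-1)          # L2-normalize image embeddings
    txt = F.normalize(text_emb, dim=-1)           # L2-normalize text embeddings
    sim = img @ txt.t()                           # (B, B) cosine-similarity matrix
    B = sim.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=sim.device)

    def one_direction(s):
        pos = s[eye] / tau                               # positive-pair logits (diagonal)
        neg = s.masked_fill(eye, float("-inf")) / tau    # decoupling: positives excluded from the denominator
        # Hard-negative weights: negatives more similar to the anchor get
        # larger weight; rescaled so the weights average to one.
        w = torch.softmax(beta * s.masked_fill(eye, float("-inf")), dim=1) * (B - 1)
        return (torch.logsumexp(neg + torch.log(w + 1e-12), dim=1) - pos).mean()

    # Symmetric image-to-text and text-to-image terms.
    return 0.5 * (one_direction(sim) + one_direction(sim.t()))

Second, a sketch of the zero-shot step in which a saliency map (e.g., from M2IB applied to BiomedCLIP features) is converted into a bounding-box prompt for SAM, using the public segment-anything API; the threshold value, checkpoint path, placeholder inputs, and saliency_to_box helper are assumptions.

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def saliency_to_box(saliency, thresh=0.5):
    # Convert an (H, W) saliency map in [0, 1] into an XYXY box prompt.
    ys, xs = np.where(saliency >= thresh)
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])

# Placeholder inputs standing in for a clinical scan and its M2IB saliency map.
image_rgb = np.zeros((256, 256, 3), dtype=np.uint8)
saliency_map = np.zeros((256, 256)); saliency_map[96:160, 96:160] = 1.0

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # checkpoint path is an assumption
predictor = SamPredictor(sam)
predictor.set_image(image_rgb)  # expects an (H, W, 3) uint8 RGB array
masks, scores, _ = predictor.predict(box=saliency_to_box(saliency_map), multimask_output=False)

The resulting mask can then serve as a pseudo-label for the weakly supervised stage described in the abstract.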


Keywords: Foundation models; Text-driven image segmentation; Vision-language models; Weakly supervised segmentation


Links

PubMed: https://pubmed.ncbi.nlm.nih.gov/40779830/

DOI: 10.1016/j.media.2025.103749