Keyword search (4,163 papers available)

"vision" Keyword-tagged Publications:

#  Title | Authors | PMID | Dept
1  Attention-Fusion-Based Two-Stream Vision Transformer for Heart Sound Classification | Ranipa K; Zhu WP; Swamy MNS | 41155032 | ENCS
2  Lung Nodule Malignancy Classification Integrating Deep and Radiomic Features in a Three-Way Attention-Based Fusion Module | Khademi S; Heidarian S; Afshar P; Mohammadi A; Sidiqi A; Nguyen ET; Ganeshan B; Oikonomou A | 41150036 | ENCS
3  MedCLIP-SAMv2: Towards universal text-driven medical image segmentation | Koleilat T; Asgariandehkordi H; Rivaz H; Xiao Y | 40779830 | ENCS
4  Inferring concussion history in athletes using pose and ground reaction force estimation and stability analysis of plyometric exercise videos | Alves W; Babouras A; Martineau PA; Schutt D; Robbins S; Fevens T | 40632382 | ENCS
5  Real-time motion detection using dynamic mode decomposition | Mignacca M; Brugiapaglia S; Bramburger JJ | 40421310 | MATHSTATS
6  Deep neural network-based robotic visual servoing for satellite target tracking | Ghiasvand S; Xie WF; Mohebbi A | 39440297 | ENCS
7  Masters students' satisfaction with academic supervision and experiences of mental and emotional distress and wellbeing | Bekkouche NS | 38848331 | EDUCATION
8  Comparing novel smartphone pose estimation frameworks with the Kinect V2 for knee tracking during athletic stress tests | Babouras A; Abdelnour P; Fevens T; Martineau PA | 38730186 | ENCS
9  Breamy: An augmented reality mHealth prototype for surgical decision-making in breast cancer | Najafi N; Addie M; Meterissian S; Kersten-Oertel M | 38638506 | ENCS
10 CosSIF: Cosine similarity-based image filtering to overcome low inter-class variation in synthetic medical image datasets | Islam M; Zunair H; Mohammed N | 38492455 | ENCS
11 Intersection of Intimate Partner Violence, Partner Interference, and Family Supportive Supervision on Victims' Work Withdrawal | Isola C; Granger S; Turner N; LeBlanc MM; Barling J | 37359457 | JMSB
12 Single Digit Index Finger Amputation-To Replant or Not? | Thibedeau M; Ramji M; McKenzie M; Yeung J; Nickerson DA | 36755823 | BIOLOGY
13 Who's cooking tonight? A time-use study of coupled adults in Toronto, Canada | Liu B; Widener MJ; Smith LG; Farber S; Gesink D; Minaker LM; Patterson Z; Larsen K; Gilliland J | 36339032 | ENCS
14 A Newly Identified Impairment in Both Vision and Hearing Increases the Risk of Deterioration in Both Communication and Cognitive Performance | Guthrie DM; Williams N; Campos J; Mick P; Orange JB; Pichora-Fuller MK; Savundranayagam MY; Wittich W; Phillips NA | 35859361 | PSYCHOLOGY
15 Assessing optimal colour and illumination to facilitate reading: an analysis of print size | Morrice E; Murphy C; Soldano V; Addona C; Wittich W; Johnson AP | 34549808 | PSYCHOLOGY
16 Assessing optimal colour and illumination to facilitate reading | Morrice E; Murphy C; Soldano V; Addona C; Wittich W; Johnson AP | 33533095 | PSYCHOLOGY
17 The Relationship Between Cognitive Status and Known Single Nucleotide Polymorphisms in Age-Related Macular Degeneration | Murphy C; Johnson AP; Koenekoop RK; Seiple W; Overbury O | 33178008 | PSYCHOLOGY
18 CCCDTD5 recommendations on early non-cognitive markers of dementia: A Canadian consensus | Montero-Odasso M; Pieruccini-Faria F; Ismail Z; Li K; Lim A; Phillips N; Kamkar N; Sarquis-Adamson Y; Speechley M; Theou O; Verghese J; Wallace L; Camicioli R | 33094146 | CRDH
19 The Prevalence of Hearing, Vision, and Dual Sensory Loss in Older Canadians: An Analysis of Data from the Canadian Longitudinal Study on Aging | Mick PT; Hämäläinen A; Kolisang L; Pichora-Fuller MK; Phillips N; Guthrie D; Wittich W | 32546290 | PSYCHOLOGY
20 Hearing and Cognitive Impairments Increase the Risk of Long-term Care Admissions | Williams N; Phillips NA; Wittich W; Campos JL; Mick P; Orange JB; Pichora-Fuller MK; Savundranayagam MY; Guthrie DM | 31911955 | PSYCHOLOGY
21 Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes | de Almeida RG; Di Nardo J; Antal C; von Grünau MW | 31649574 | PSYCHOLOGY
22 Integration of Growth and Cell Size via the TOR Pathway and the Dot6 Transcription Factor in Candida albicans | Chaillot J; Tebbji F; Mallick J; Sellam A | 30593490 | BIOLOGY

 

Title: MedCLIP-SAMv2: Towards universal text-driven medical image segmentation
Authors: Koleilat T; Asgariandehkordi H; Rivaz H; Xiao Y
Link: https://pubmed.ncbi.nlm.nih.gov/40779830/
DOI: 10.1016/j.media.2025.103749
Publication: Medical Image Analysis
Keywords: Foundation models; Text-driven image segmentation; Vision-language models; Weakly supervised segmentation
PMID: 40779830  Category:  Date Added: 2025-08-09
Dept Affiliation: ENCS
1 Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada. Electronic address: taha.koleilat@mail.concordia.ca.
2 Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada.
3 Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec, Canada.

Description:

Segmentation of anatomical structures and pathologies in medical images is essential for modern disease diagnosis, clinical research, and treatment planning. While significant advancements have been made in deep learning-based segmentation techniques, many of these methods still suffer from limitations in data efficiency, generalizability, and interactivity. As a result, developing robust segmentation methods that require fewer labeled datasets remains a critical challenge in medical image analysis. Recently, the introduction of foundation models like CLIP and Segment-Anything-Model (SAM), with robust cross-domain representations, has paved the way for interactive and universal image segmentation. However, further exploration of these models for data-efficient segmentation in medical imaging is an active field of research. In this paper, we introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans using text prompts, in both zero-shot and weakly supervised settings. Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss, and leveraging the Multi-modal Information Bottleneck (M2IB) to create visual prompts for generating segmentation masks with SAM in the zero-shot setting. We also investigate using zero-shot segmentation labels in a weakly supervised paradigm to enhance segmentation quality further. Extensive validation across four diverse segmentation tasks and medical imaging modalities (breast tumor ultrasound, brain tumor MRI, lung X-ray, and lung CT) demonstrates the high accuracy of our proposed framework. Our code is available at https://github.com/HealthX-Lab/MedCLIP-SAMv2.
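The abstract mentions fine-tuning BiomedCLIP with a Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss. As a rough illustration of the family of objectives that name points to, the following is a minimal pure-Python sketch of a decoupled contrastive loss with hard-negative weighting over paired image/text embeddings. The function name, the `beta` weighting scheme, and all parameter values here are illustrative assumptions, not the formulation from the paper; the authors' actual implementation is in the linked GitHub repository.

```python
import math

def dhn_nce_sketch(img_emb, txt_emb, temperature=0.07, beta=0.15):
    """Toy decoupled contrastive loss with hard-negative weighting.

    Positives are the matched (image_i, text_i) pairs. "Decoupled" means
    the positive pair is excluded from the denominator; "hard negative"
    means high-similarity negatives are up-weighted. All details here are
    illustrative, not the paper's DHN-NCE formulation.
    """
    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    img = [normalize(v) for v in img_emb]
    txt = [normalize(v) for v in txt_emb]
    n = len(img)
    # Cosine-similarity logits, scaled by the temperature
    sim = [[dot(img[i], txt[j]) / temperature for j in range(n)]
           for i in range(n)]
    losses = []
    for i in range(n):
        pos = sim[i][i]                                 # matched pair
        negs = [sim[i][j] for j in range(n) if j != i]  # drop the positive
        # Hard-negative weights: emphasize negatives similar to the anchor
        w = [math.exp(beta * s) for s in negs]
        mean_w = sum(w) / len(w)
        w = [x / mean_w for x in w]
        denom = sum(wi * math.exp(s) for wi, s in zip(w, negs))
        losses.append(-pos + math.log(denom))
    return sum(losses) / n
```

With perfectly aligned embeddings the matched-pair term dominates and the loss is low; shuffling the image-text pairing raises it, which is the behavioural signature any contrastive objective in this family should show.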





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026