Keyword search (4,164 papers available)

"contrast" Keyword-tagged Publications:

Title | Authors | PubMed ID | Dept

1. Disentangled representation learning for multi-view clustering via von Mises-Fisher hyperspherical embedding | Li Z; Luo Z; Bouguila N; Su W; Fan W | 40664160 | ENCS
2. Joint enhancement of automatic chest x-ray diagnosis and radiological gaze prediction with multistage cooperative learning | Qiu Z; Rivaz H; Xiao Y | 40665596 | ENCS
3. Investigation of Phase-Change Droplets and Fast Imaging for Indicator Dilution Measurement of Flow | Zajac Z; Helfield B; Williams R; Sheeran P; Tremblay-Darveau C; Yoo K; Burns PN | 40387284 | BIOLOGY
4. The effect of micro-vessel viscosity on the resonance response of a two-microbubble system | Yusefi H; Helfield B | 39705920 | BIOLOGY
5. Context changes judgments of liking and predictability for melodies | Albury AW; Bianco R; Gold BP; Penhune VB | 38034280 | PSYCHOLOGY
6. Investigating the Accumulation of Submicron Phase-Change Droplets in Tumors | Helfield BL; Yoo K; Liu J; Williams R; Sheeran PS; Goertz DE; Burns PN | 32732167 | BIOLOGY
7. Simulation of Capillary Hemodynamics and Comparison with Experimental Results of Microphantom Perfusion Weighted Imaging | S S; N RA | 32637373 | PHYSICS
8. A dataset of multi-contrast population-averaged brain MRI atlases of a Parkinson's disease cohort | Xiao Y; Fonov V; Chakravarty MM; Beriault S; Al Subaie F; Sadikot A; Pike GB; Bertrand G; Collins DL | 28491942 | PERFORM


Title: Joint enhancement of automatic chest x-ray diagnosis and radiological gaze prediction with multistage cooperative learning
Authors: Qiu Z; Rivaz H; Xiao Y
Link: https://pubmed.ncbi.nlm.nih.gov/40665596/
DOI: 10.1002/mp.17977
Publication: Medical Physics
Keywords: computer-assisted diagnosis; contrastive learning; multitask learning; visual attention; x-ray
PMID: 40665596 | Category: | Date Added: 2025-07-16
Dept Affiliation: ENCS
1 Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec, Canada.
2 Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada.

Description:

Background: As visual inspection is an inherent process during radiological screening, the associated eye gaze data can provide valuable insights into relevant clinical decision processes and facilitate computer-assisted diagnosis. However, the relevant techniques are still under-explored.

Purpose: With deep learning becoming the state-of-the-art for computer-assisted diagnosis, integrating human behavior, such as eye gaze data, into these systems is instrumental to help guide machine predictions with clinical diagnostic criteria, thus enhancing the quality of automatic radiological diagnosis. In addition, the ability to predict a radiologist's gaze saliency from a clinical scan, along with the automatic diagnostic result, could be valuable for end users.

Methods: We propose a novel deep learning framework for joint disease diagnosis and prediction of corresponding radiological gaze saliency maps for chest x-ray scans. Specifically, we introduce a new dual-encoder multitask UNet, which leverages both a DenseNet201 backbone and a Residual and Squeeze-and-Excitation block-based encoder to extract diverse features for visual saliency map prediction and a multiscale feature-fusion classifier to perform disease classification. To tackle the issue of asynchronous training schedules of individual tasks in multitask learning, we propose a multistage cooperative learning strategy, with contrastive learning for feature encoder pretraining to boost performance.
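The abstract mentions contrastive learning for feature encoder pretraining but does not give the exact objective. A minimal sketch of one common formulation, an InfoNCE-style loss over paired embeddings of two augmented views of the same scans, is shown below; the function name, the NumPy implementation, and the choice of temperature are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (assumed formulation).

    z1, z2: (N, D) arrays of embeddings for two augmented views;
    row i of z1 and row i of z2 form a positive pair.
    """
    # L2-normalize so similarities are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # (N, N) similarity matrix, scaled by temperature
    logits = z1 @ z2.T / temperature
    # Positives sit on the diagonal; score each row as an N-way classification
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this kind of objective, perfectly aligned view pairs yield a near-zero loss while mismatched pairs are penalized, which is what makes it useful for pretraining an encoder before the multitask stages.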

Results: Our proposed method is shown to significantly outperform existing techniques for chest radiography diagnosis (AUC = 0.93) and the quality of visual saliency map prediction (correlation coefficient = 0.58).

Conclusion: Through the proposed multitask, multistage cooperative learning, our technique demonstrates the benefit of integrating clinicians' eye gaze into radiological AI systems to boost performance and, potentially, explainability.

BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026