Lung Nodule Malignancy Classification Integrating Deep and Radiomic Features in a Three-Way Attention-Based Fusion Module

Authors: Khademi S, Heidarian S, Afshar P, Mohammadi A, Sidiqi A, Nguyen ET, Ganeshan B, Oikonomou A


Affiliations

1 Concordia Institute for Information Systems Engineering, Montreal, QC H3G 1M8, Canada.
2 Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada.
3 Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON M4N 3M5, Canada.
4 Department of Medical Imaging, University Health Network, University of Toronto, Toronto, ON M5G 2N2, Canada.
5 Institute of Nuclear Medicine, University College London, 235 Euston Road, London NW1 2BU, UK.

Description

In this study, we propose a novel hybrid framework for assessing the invasiveness of lung adenocarcinomas presenting as subsolid nodules on Computed Tomography (CT), evaluated on an in-house dataset of 114 pathologically proven cases. Nodules were classified into group 1 (G1), which included atypical adenomatous hyperplasia, adenocarcinoma in situ, and minimally invasive adenocarcinoma, and group 2 (G2), which included invasive adenocarcinomas. Our approach performs a three-way Integration of Visual, Spatial, and Temporal features with Attention, referred to as I-VISTA, obtained from three processing algorithms designed based on Deep Learning (DL) and radiomic models, leading to a more comprehensive analysis of nodule variations. These processing algorithms are arranged in three parallel paths: (i) the Shifted Window (SWin) Transformer path, a hierarchical vision Transformer that extracts nodule-related spatial features; (ii) the Convolutional Auto-Encoder (CAE) Transformer path, which captures informative features related to inter-slice relations via a modified Transformer encoder architecture; and (iii) a 3D radiomic-based path that collects quantitative features based on texture analysis of each nodule. The extracted feature sets are then passed through a Criss-Cross attention fusion module to discover the most informative feature patterns and classify nodule type. Experiments were evaluated using a ten-fold cross-validation scheme. The I-VISTA framework achieved the best performance, with overall accuracy, sensitivity, and specificity (mean ± std) of 93.93 ± 6.80%, 92.66 ± 9.04%, and 94.99 ± 7.63%, respectively, and an Area under the ROC Curve (AUC) of 0.93 ± 0.08 for lung nodule classification across the ten folds. The hybrid framework integrating DL and hand-crafted 3D radiomic features outperformed the standalone DL and 3D radiomic models in differentiating G1 from G2 subsolid nodules identified on CT.
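To illustrate the overall idea of attention-based fusion of three feature sets, the following minimal Python (PyTorch) sketch stacks three modality feature vectors as tokens and mixes them with multi-head attention before a two-class head. It is not the authors' I-VISTA implementation: the SWin, CAE-Transformer, and 3D radiomic paths are replaced by placeholder vectors, the generic multi-head attention stands in for the paper's Criss-Cross fusion module, and all names and dimensions are illustrative assumptions.

import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse three modality feature vectors with multi-head attention
    (a simplified stand-in for a criss-cross style fusion module)."""

    def __init__(self, dim: int = 256, heads: int = 4, num_classes: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, f_spatial, f_slice, f_radiomic):
        # Stack the three modality features as a length-3 token sequence: (B, 3, D).
        tokens = torch.stack([f_spatial, f_slice, f_radiomic], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)    # cross-modality attention
        fused = self.norm(fused + tokens).mean(dim=1)   # residual, then pool over modalities
        return self.head(fused)                         # G1 vs. G2 logits


if __name__ == "__main__":
    batch, dim = 8, 256
    # Placeholder features standing in for the SWin Transformer, CAE Transformer,
    # and 3D radiomic paths, each projected to a common dimension (assumption).
    f_swin = torch.randn(batch, dim)
    f_cae = torch.randn(batch, dim)
    f_rad = torch.randn(batch, dim)
    logits = AttentionFusion(dim)(f_swin, f_cae, f_rad)
    print(logits.shape)  # torch.Size([8, 2])

In this simplified view, each path contributes one token, so attention learns how much each modality should inform the final nodule-type decision; the published framework applies a richer Criss-Cross attention scheme over the extracted feature sets.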


Keywords: attention fusion; auto-encoder; deep learning; lung cancer; malignancy classification; vision transformer


Links

PubMed: https://pubmed.ncbi.nlm.nih.gov/41150036/

DOI: 10.3390/jimaging11100360