Keyword search (4,163 papers available)

Publications tagged with the keyword "augmented reality":

Each entry: Title. Authors. PMID. [Dept Affiliation]

1. From tissue to sound: A new paradigm for medical sonic interaction design. Matinfar S; Dehghani S; Salehi M; Sommersperger M; Navab N; Faridpooya K; Fairhurst M; Navab N. PMID: 40222195. [CONCORDIA]
2. iSurgARy: A mobile augmented reality solution for ventriculostomy in resource-limited settings. Asadi Z; Castillo JP; Asadi M; Sinclair DS; Kersten-Oertel M. PMID: 39816703. [ENCS]
3. A usability analysis of augmented reality and haptics for surgical planning. Kazemipour N; Hooshiar A; Kersten-Oertel M. PMID: 38942947. [ENCS]
4. Virtual and Augmented Reality in Ventriculostomy: A Systematic Review. Alizadeh M; Xiao Y; Kersten-Oertel M. PMID: 38823448. [ENCS]
5. A decade of progress: bringing mixed reality image-guided surgery systems in the operating room. Asadi Z; Asadi M; Kazemipour N; Léger É; Kersten-Oertel M. PMID: 38794834. [ENCS]
6. Breamy: An augmented reality mHealth prototype for surgical decision-making in breast cancer. Najafi N; Addie M; Meterissian S; Kersten-Oertel M. PMID: 38638506. [ENCS]
7. MARIN: an open-source mobile augmented reality interactive neuronavigation system. Léger É; Reyes J; Drouin S; Popa T; Hall JA; Collins DL; Kersten-Oertel M. PMID: 32323206. [PERFORM]
8. Augmented reality mastectomy surgical planning prototype using the HoloLens template for healthcare technology letters. Amini S; Kersten-Oertel M. PMID: 32038868. [PERFORM]
9. Quantifying attention shifts in augmented reality image-guided neurosurgery. Léger É; Drouin S; Collins DL; Popa T; Kersten-Oertel M. PMID: 29184663. [PERFORM]
10. Combining intraoperative ultrasound brain shift correction and augmented reality visualizations: a pilot study of eight cases. Gerard IJ; Kersten-Oertel M; Drouin S; Hall JA; Petrecca K; De Nigris D; Di Giovanni DA; Arbel T; Collins DL. PMID: 29392162. [PERFORM]
11. Gesture-based registration correction using a mobile augmented reality image-guided neurosurgery system. Léger É; Reyes J; Drouin S; Collins DL; Popa T; Kersten-Oertel M. PMID: 30800320. [PERFORM]


Title: From tissue to sound: A new paradigm for medical sonic interaction design
Authors: Matinfar S; Dehghani S; Salehi M; Sommersperger M; Navab N; Faridpooya K; Fairhurst M; Navab N
Link: https://pubmed.ncbi.nlm.nih.gov/40222195/
DOI: 10.1016/j.media.2025.103571
Publication: Medical Image Analysis
Keywords: Auditory feedback; Augmented reality; Medical imaging; Mixed reality; Model-based sonification; Physical modeling synthesis; Sonification
PMID: 40222195
Category:
Date Added: 2025-04-14
Dept Affiliation: CONCORDIA
1 Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany. Electronic address: sasan.matinfar@tum.de.
2 Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany.
3 Topological Media Lab, Concordia University, Montreal, Canada.
4 Rotterdam Eye Hospital, Rotterdam, The Netherlands.
5 Centre for Tactile Internet with Human-in-the-Loop, Technical University of Dresden, Dresden, Germany.

Description:

Medical imaging maps tissue characteristics into image intensity values, enhancing human perception. However, comprehending this data, especially in high-stakes scenarios such as surgery, is prone to errors. Additionally, current multimodal methods do not fully leverage this valuable data in their design. We introduce "From Tissue to Sound," a new paradigm for medical sonic interaction design. This paradigm establishes a comprehensive framework for mapping tissue characteristics to auditory displays, providing dynamic and intuitive access to medical images that complement visual data, thereby enhancing multimodal perception.

"From Tissue to Sound" provides an advanced and adaptable framework for the interactive sonification of multimodal medical imaging data. This framework employs a physics-based sound model composed of a network of multiple oscillators, whose mechanical properties, such as friction and stiffness, are defined by tissue characteristics extracted from imaging data. This approach enables the representation of anatomical structures and the creation of unique acoustic profiles in response to excitations of the sound model. This method allows users to explore data at a fundamental level, identifying tissue characteristics ranging from rigid to soft, dense to sparse, and structured to scattered. It facilitates intuitive discovery of both general and detailed patterns with minimal preprocessing. Unlike conventional methods that transform low-dimensional data into global sound features through a parametric approach, this method utilizes model-based unsupervised mapping between data and an anatomical sound model, enabling high-dimensional data processing. The versatility of this method is demonstrated through feasibility experiments confirming the generation of perceptually discernible acoustic signals. Furthermore, we present a novel application developed based on this framework for retinal surgery.

This new paradigm opens up possibilities for designing multisensory applications for multimodal imaging data. It also facilitates the creation of interactive sonification models with various auditory causality approaches, enhancing both directness and richness.
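To make the mapping idea concrete: a minimal sketch of tissue-to-sound sonification, assuming a hypothetical mapping in which higher tissue intensity means stiffer (higher-pitched) oscillators and lower intensity means more friction (faster decay). This is an illustration of the general technique, not the authors' implementation, which uses a coupled network of oscillators rather than the independent bank shown here.

```python
import numpy as np

def tissue_to_sound(intensities, sr=8000, dur=0.25):
    """Render normalized tissue intensities (0..1) as a bank of
    damped sinusoidal oscillators and sum them into one signal.

    Hypothetical mapping:
      stiffness -> pitch:  denser tissue rings at a higher frequency
      friction  -> decay:  softer tissue is damped more quickly
    """
    t = np.arange(int(sr * dur)) / sr
    sig = np.zeros_like(t)
    for x in np.asarray(intensities, dtype=float):
        freq = 200.0 + 1800.0 * x           # stiffness maps to pitch (Hz)
        damping = 3.0 + 30.0 * (1.0 - x)    # friction maps to decay rate
        sig += np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(sig))
    return sig / peak if peak > 0 else sig  # normalize to [-1, 1]

# Exciting the model with three tissue samples of increasing density
signal = tissue_to_sound([0.2, 0.5, 0.9])
```

Because each intensity profile yields a distinct spectral and decay envelope, two different tissue regions produce perceptually discernible acoustic responses, which is the property the feasibility experiments in the paper verify for the full coupled model.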

BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026