Keyword search (4,163 papers available)

"Decision support" Keyword-tagged Publications:

1. Towards user-centered interactive medical image segmentation in VR with an assistive AI agent
   Authors: Spiegler P; Harirpoush A; Xiao Y
   PubMed ID: 41509996 | Dept: ENCS

2. GOOSM: A GIS-based offshore oil spill management tool for enhanced response and preparedness
   Authors: Yang Z; Chen Z; Lee K
   PubMed ID: 40279774 | Dept: ENCS

3. An intelligent decision support system for groundwater supply management and electromechanical infrastructure controls
   Authors: Ataei P; Takhtravan A; Gheibi M; Chahkandi B; Faramarz MG; Waclawek S; Fathollahi-Fard AM; Behzadian K
   PubMed ID: 38317976 | Dept: ENCS

4. Development and validation of risk of CPS decline (RCD): a new prediction tool for worsening cognitive performance among home care clients in Canada
   Authors: Guthrie DM; Williams N; O'Rourke HM; Orange JB; Phillips N; Pichora-Fuller MK; Savundranayagam MY; Sutradhar R
   PubMed ID: 38041046 | Dept: CRDH


Title: Towards user-centered interactive medical image segmentation in VR with an assistive AI agent
Authors: Spiegler P; Harirpoush A; Xiao Y
Link: https://pubmed.ncbi.nlm.nih.gov/41509996/
DOI: 10.1007/s10055-025-01284-0
Publication: Virtual reality
Keywords: AI agent; Attention switching; Clinical decision support; Eye tracking; Foundation model; Human-in-the-loop; Medical image segmentation; Medical visualization; Virtual reality
PMID: 41509996
Category:
Date Added: 2026-01-09
Dept Affiliation: ENCS
1. Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec, Canada.

Description:

Crucial in disease analysis and surgical planning, manual segmentation of volumetric medical scans (e.g., MRI, CT) is laborious, error-prone, and challenging to master, while fully automatic algorithms can benefit from user feedback. Therefore, combining the complementary strengths of the latest radiological AI foundation models and virtual reality (VR)'s intuitive data interaction, we propose SAMIRA, a novel conversational AI agent for medical VR that assists users with localizing, segmenting, and visualizing 3D medical concepts. Through speech-based interaction, the agent helps users understand radiological features, locate clinical targets, and generate segmentation masks that can be refined with just a few point prompts. The system also supports true-to-scale 3D visualization of segmented pathology to enhance patient-specific anatomical understanding. Furthermore, to determine the optimal interaction paradigm under near-far attention switching for refining segmentation masks in an immersive, human-in-the-loop workflow, we compare VR controller pointing, head pointing, and eye tracking as input modes. A user study demonstrated a high usability score (SUS = 90.0 ± 9.0) and low overall task load, as well as strong support for the proposed VR system's guidance, its training potential, and the integration of AI into radiological segmentation tasks.





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026