Authors: Jones Z, Drouin S, Kersten-Oertel M
Purpose: Virtual reality (VR) can offer immersive platforms for segmenting complex medical images, facilitating a better understanding of anatomical structures for training, diagnosis, surgical planning, and treatment evaluation. These applications rely on user interaction within the VR environment to manipulate and interpret medical data. However, the optimal interaction schemes and input devices for segmentation tasks in VR remain unclear. This study compares user performance and experience across two different input schemes.
Methods: Twelve participants segmented six CT/MRI images using two input methods: keyboard and mouse (KBM) and motion controllers (MCs). Performance was assessed using accuracy, completion time, and efficiency. A post-task questionnaire measured users' perceived performance and experience.
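Note: The abstract lists accuracy as a performance measure but does not state how it was computed. A common choice for segmentation tasks is a Dice-style overlap score; the sketch below (in Python with NumPy, using the illustrative function name dice_score, which is not from the paper) shows how such a score could be calculated against a reference mask. It is an assumed example, not the authors' reported metric.

```python
# Hypothetical sketch: Dice overlap between a participant's segmentation
# and a reference mask. The paper's actual accuracy metric is not stated
# in the abstract; Dice is used here purely as a common example.
import numpy as np

def dice_score(pred: np.ndarray, reference: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(pred, reference).sum()
    total = pred.sum() + reference.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 3D masks standing in for a user segmentation and a reference
pred = np.zeros((4, 4, 4), dtype=bool)
ref = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True
ref[1:3, 1:3, :3] = True
print(f"Dice: {dice_score(pred, ref):.3f}")
```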
Results: No significant overall time difference was observed between the two input methods, though KBM was faster for larger segmentation tasks. Accuracy was consistent across input schemes. Participants rated both methods as equally challenging, with similar efficiency levels, but found MCs more enjoyable to use.
Conclusion: These findings suggest that VR segmentation software should support flexible input options tailored to task complexity. Future work should explore enhancements to motion controller interfaces to improve usability and user experience.
Keywords: Contours; Interaction methods; Radiology; Segmentation; Virtual reality
PubMed: https://pubmed.ncbi.nlm.nih.gov/40402355/
DOI: 10.1007/s11548-025-03424-y