Title | Open source platform for automated collection of training data to support video-based feedback in surgical simulators |
Publication Type | Conference Abstract |
Year of Publication | 2020 |
Authors | Laframboise, J., Ungi T., Sunderland K. R., Zevin B., & Fichtinger G. |
Conference Name | SPIE Medical Imaging |
Publisher | SPIE |
Conference Location | Houston, United States |
Keywords | 3D Slicer, AI, annotation, collection, data, deep learning, PLUS, surgical training, video |
Abstract | Purpose: Surgical training could be improved by automatic detection of workflow steps. A platform to collect and organize tracking and video data would enable rapid development of deep learning solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer / PLUS Toolkit as a platform for video annotation by identifying and annotating tools interacting with tissues in simulated hernia repair. Methods: Tracking data from an optical tracker and video data from a camera are collected by PLUS and 3D Slicer. To demonstrate the platform in use, we identify tissues during a surgical procedure using a neural network. The tracking data are used to identify which tool is in use. The solution is deployed as a custom Slicer module. Results: The platform allowed the collection and storage of enough tracked video data to train a convolutional neural network (CNN) to detect interactions between tools and tissues. The CNN was trained on these data and applied to new data with a testing accuracy of 98%. The model’s predictions can be weighted over several frames with a custom Slicer module to improve accuracy. Conclusion: We found 3D Slicer and the PLUS Toolkit to be a viable platform for training and deploying a solution that combines automatic video processing with optical tool tracking. We built a proof-of-concept system that identifies tissues with a trained CNN in real time while tracking surgical tools. |
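The abstract mentions weighting the model's per-frame predictions over several frames to improve accuracy. A minimal sketch of one common way to do this — averaging class probabilities over a sliding window before taking the argmax — is shown below. The class name, window size, and two-class example are illustrative assumptions, not the authors' actual Slicer module.

```python
from collections import deque

import numpy as np


class PredictionSmoother:
    """Weight per-frame class probabilities over a sliding window.

    Illustrative sketch only; the real module described in the abstract
    may weight frames differently (e.g. with decaying weights).
    """

    def __init__(self, window_size=5):
        # Older frames fall out of the window automatically.
        self.window = deque(maxlen=window_size)

    def update(self, class_probabilities):
        """Add one frame's class probabilities; return the smoothed class index."""
        self.window.append(np.asarray(class_probabilities, dtype=float))
        mean_probs = np.mean(self.window, axis=0)
        return int(np.argmax(mean_probs))


# A noisy single-frame prediction is outvoted by the recent history:
smoother = PredictionSmoother(window_size=3)
smoother.update([0.9, 0.1])          # frame 1: confident class 0
smoother.update([0.8, 0.2])          # frame 2: confident class 0
label = smoother.update([0.4, 0.6])  # frame 3: noisy flip toward class 1
print(label)  # mean probabilities (0.7, 0.3) still favor class 0 -> 0
```

Smoothing like this trades a small amount of latency (the window must partly fill before a change of tissue or tool is reflected) for robustness against single-frame misclassifications.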
PerkWeb Citation Key | Laframboise2020a |