Presentation Authors: Aniruddh Puranic, Jian Chen*, Jessica Nguyen, Jyotirmoy Deshmukh, Andrew Hung, Los Angeles, CA
Introduction: Due to the lack of instrument force feedback during robot-assisted surgery, tissue-handling technique is an important aspect of surgical performance to assess. Herein, we present preliminary data (object detection, distance prediction) from our development of a vision-based machine learning (VBML) algorithm that measures needle entry point deviation in tissue during robotic suturing as a proxy for tissue trauma.
Methods: The VBML algorithm was developed on a robotic "suture sponge" exercise. 1) Algorithm training: Inked black dots on the yellow sponge were detected in the endoscopic view by an image segmentation algorithm using a sequence of image processing techniques. The distance between a given pair of dots was measured by the algorithm in pixels. The actual distance (ground truth) between the two dots was also measured manually on the sponge. A linear regression machine learning algorithm was trained on these input/output data, where the input (i.e., the training feature) is the inter-dot distance in the endoscopic view in pixels, and the output is the true distance in cm. 2) Algorithm testing: We used the trained algorithm to predict the distance between pairs of dots on new endoscopic views and compared the predicted distances to the ground truth measurements on the sponge. 3) Algorithm application to videos: Once tested, we used the algorithm to track the needle entry point and the dots on individual frames throughout the video.
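The training pipeline described above can be sketched as follows. This is a minimal, pure-Python stand-in (the function names and the flood-fill segmentation are illustrative assumptions, not the study's actual implementation, which used a sequence of image processing techniques on endoscopic frames): dark dots are segmented from a bright background, their centroids give pixel distances, and a least-squares line maps pixel distance to true distance.

```python
def detect_dots(image, threshold=50):
    """Segment dark dots on a bright background; return their centroids.

    `image` is a 2-D list of grayscale values (0 = black, 255 = white).
    Connected dark pixels are grouped with a simple flood fill -- a
    stand-in for the study's segmentation step.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r in range(h):
        for c in range(w):
            if image[r][c] < threshold and not seen[r][c]:
                # Flood-fill one connected component of dark pixels.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and image[ny][nx] < threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids

def pixel_distance(p, q):
    """Euclidean distance between two centroids, in pixels."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def fit_linear(pixel_dists, true_dists_cm):
    """Closed-form least-squares fit of true_cm = a * pixels + b,
    i.e. the linear regression step of the training phase."""
    n = len(pixel_dists)
    mx = sum(pixel_dists) / n
    my = sum(true_dists_cm) / n
    a = (sum((x - mx) * (y - my)
             for x, y in zip(pixel_dists, true_dists_cm))
         / sum((x - mx) ** 2 for x in pixel_dists))
    return a, my - a * mx
```

At test time, the fitted `(a, b)` pair converts any new pixel distance to a predicted distance in cm, which is then compared against a manual measurement on the sponge.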
Results: During image processing, the image segmentation algorithm successfully detected all 27 black dots on the suture sponge. On average, six noise points were detected per image. Distance between a given pair of dots, as predicted by the algorithm, was strongly correlated with the ground truth measurements (2.78±1.15 cm vs. 2.8±1.20 cm, r=0.99, p < 0.001). On the training data, the algorithm achieved an error of 0.095±0.055 cm. On the test data, the algorithm achieved an error of 0.11±0.019 cm.
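The two statistics reported above can be computed as below. This is a generic sketch of the metrics (mean absolute error between predicted and measured distances, and the Pearson correlation coefficient r), not the study's actual analysis code.

```python
def mean_abs_error(predicted, truth):
    """Mean absolute prediction error, in the same units as the inputs (cm)."""
    return sum(abs(p - t) for p, t in zip(predicted, truth)) / len(predicted)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

An r near 1 with a small mean absolute error, as reported, indicates the predicted distances track the manual sponge measurements closely on both training and test data.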
Conclusions: In this study, we successfully developed a VBML algorithm to detect needles and objects during a robotic "suture sponge" exercise. The algorithm could accurately predict the distance between two objects in the robotic endoscopic view. Our next step is to implement convolutional neural networks and temporal clustering algorithms to recognize the activity being performed (i.e., needle driving). The sequential execution of all these steps will be used to determine the needle entry point and its deviation during a suturing movement, as a proxy for tissue trauma and a measure of instrument force sensitivity.