Quantification of Robotic Surgeries with Vision-Based Deep Learning

by Dani Kiyasseh et al.

Surgery is a high-stakes domain in which surgeons must navigate critical anatomical structures and actively avoid potential complications while completing the main task at hand. Such surgical activity has been shown to affect long-term patient outcomes. To better understand this relationship, whose mechanics remain unknown for the majority of surgical procedures, we hypothesize that the core elements of surgery must first be quantified in a reliable, objective, and scalable manner. We believe this quantification is a prerequisite for providing surgical feedback and modulating surgeon performance in pursuit of improved patient outcomes. To holistically quantify surgeries, we propose a unified deep learning framework, entitled Roboformer, which operates exclusively on videos recorded during surgery to independently achieve multiple tasks: surgical phase recognition (the what of surgery), and gesture classification and skills assessment (the how of surgery). We validated our framework on four video-based datasets covering two commonly-encountered types of surgical steps (dissection and suturing) within minimally-invasive robotic surgeries. We demonstrated that our framework generalizes well to unseen videos, surgeons, medical centres, and surgical procedures. We also found that our framework, which naturally lends itself to explainable findings, identified relevant information when achieving a particular task. These findings are likely to give surgeons more confidence in our framework's behaviour, increasing the likelihood of clinical adoption and thus paving the way for more targeted surgical feedback.
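The abstract describes a transformer-style model that consumes surgical video and produces predictions for several tasks (phase, gesture, skill). The authors' architecture is not detailed here, so the following is only a hypothetical sketch: it assumes per-frame features have already been extracted, encodes them with a standard transformer encoder, and attaches one lightweight classification head per task on a pooled video-level representation. All dimensions and class counts are illustrative, not taken from the paper.

```python
# Hypothetical multi-task video model (NOT the authors' Roboformer code):
# per-frame features -> transformer encoder -> mean-pooled clip embedding
# -> separate heads for phase, gesture, and skill predictions.
import torch
import torch.nn as nn

class MultiTaskVideoSketch(nn.Module):
    def __init__(self, feat_dim=128, n_heads=4, n_layers=2,
                 n_phases=5, n_gestures=8, n_skill_levels=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # One head per task, all sharing the same video representation.
        self.phase_head = nn.Linear(feat_dim, n_phases)
        self.gesture_head = nn.Linear(feat_dim, n_gestures)
        self.skill_head = nn.Linear(feat_dim, n_skill_levels)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, feat_dim) pre-extracted frame features
        h = self.encoder(frame_feats)      # add temporal context per frame
        pooled = h.mean(dim=1)             # clip-level representation
        return (self.phase_head(pooled),
                self.gesture_head(pooled),
                self.skill_head(pooled))

model = MultiTaskVideoSketch()
feats = torch.randn(2, 16, 128)            # 2 clips, 16 frames each
phase, gesture, skill = model(feats)
```

Sharing one encoder across tasks mirrors the "unified framework" framing in the abstract; swapping the mean pooling for attention pooling, or training each head separately, would be equally plausible design choices.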

