Mixed Reality Annotation of Robotic-Assisted Surgery videos with real-time tracking and stereo matching
Published in: Computers & Graphics, 2023-02, Vol. 110, pp. 125-140
Main Authors:
Format: Article
Language: English
Summary: Robotic-Assisted Surgery (RAS) is beginning to unlock its potential. However, despite the latest advances in RAS, the steep learning curve of RAS devices remains a problem. A common teaching resource in surgery is the use of videos of previous procedures, which in RAS are almost always stereoscopic. It is important to be able to add virtual annotations onto these videos so that certain elements of the surgical process are tracked and highlighted during the teaching session. Including virtual annotations in stereoscopic videos turns them into Mixed Reality (MR) experiences, in which tissues, tools and procedures are better observed. However, MR-based annotation of objects requires tracking and some kind of depth estimation. For this reason, this paper proposes a real-time hybrid tracking–matching method for performing virtual annotations on RAS videos. The proposed method is hybrid because it combines tracking and stereo matching, avoiding the need to calculate the real depth of the pixels. The method was tested with six different state-of-the-art trackers and assessed with videos of a sigmoidectomy of a sigma neoplasia, performed with a Da Vinci® X surgical system. Objective assessment metrics are proposed, presented and calculated for the different solutions. The results show that the method can successfully annotate RAS videos in real time. Of all the trackers tested with the presented method, the CSRT (Channel and Spatial Reliability Tracking) tracker proved the most reliable and robust in terms of tracking capabilities. In addition, in the absence of an absolute ground truth, an assessment with a domain expert was performed using a novel continuous-rating method with an Oculus Quest 2 Virtual Reality device, showing that the depth perception of the virtual annotations is good, even though no absolute depth values are calculated.
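To illustrate the general idea described in the summary, the sketch below follows the hybrid scheme in spirit: a 2D tracker (here OpenCV's CSRT) follows the annotated region in the left view, and stereo matching (here normalized template matching along the same rows of the right view, assuming rectified video) locates the corresponding region in the other view, so the annotation can be drawn in both views without computing metric depth. This is a hypothetical minimal example, not the authors' implementation; the function name `annotate_stereo`, the search-window parameter and the fallback behaviour are assumptions.

```python
import cv2


def annotate_stereo(left_frames, right_frames, init_bbox, search_margin=80):
    """Yield (left, right) frame pairs with a rectangle annotation drawn on both views.

    init_bbox is (x, y, w, h) in the first left frame; the stereo pair is assumed
    to be rectified, so corresponding points share the same image row."""
    tracker = cv2.TrackerCSRT_create()               # requires opencv-contrib-python
    tracker.init(left_frames[0], init_bbox)

    for left, right in zip(left_frames, right_frames):
        ok, bbox = tracker.update(left)              # 2D tracking in the left view
        if not ok:
            yield left, right                        # tracking lost: pass frames through
            continue
        x, y, w, h = (int(v) for v in bbox)
        patch = left[y:y + h, x:x + w]

        # Stereo matching: search for the tracked patch along the same rows of the
        # right view within a bounded horizontal window, instead of estimating the
        # true depth of the pixels.
        x0 = max(0, x - search_margin)
        x1 = min(right.shape[1], x + w + search_margin)
        strip = right[y:y + h, x0:x1]
        if patch.size and strip.shape[1] >= patch.shape[1]:
            scores = cv2.matchTemplate(strip, patch, cv2.TM_CCOEFF_NORMED)
            _, _, _, (dx, _) = cv2.minMaxLoc(scores)
            xr = x0 + dx                             # matched x position in the right view
        else:
            xr = x                                   # fallback: assume zero disparity

        cv2.rectangle(left, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.rectangle(right, (xr, y), (xr + w, y + h), (0, 255, 0), 2)
        yield left, right
```

Because the annotation is simply redrawn at the horizontally shifted, disparity-consistent position in the second view, it appears at a plausible depth when the pair is viewed stereoscopically, without any metric depth computation, which is the property the summary emphasizes.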
Highlights:
• Enhancing videos of robotic surgery with virtual annotations is useful for training.
• Tracking and depth estimation often required for annotation of stereoscopic videos.
• A real-time tracking–matching framework is proposed; true pixel depth not necessary.
• Validation with objective metrics and a domain expert using a continuous rating.
• CSRT-based method provides suitable tracking and perception of virtual annotations.
ISSN: 0097-8493, 1873-7684
DOI: 10.1016/j.cag.2022.12.006