Occlusion-Robust Visual Markerless Bone Tracking for Computer-Assisted Orthopedic Surgery

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, pp. 1-11
Main Authors: Hu, Xue; Nguyen, Anh; Rodriguez y Baena, Ferdinando
Format: Article
Language: English
Description
Summary: Conventional computer-assisted orthopedic navigation systems rely on the tracking of dedicated optical markers for patient poses, which makes the surgical workflow more invasive, tedious, and expensive. To address this limitation, some previous studies have successfully adapted existing deep learning frameworks to automatically segment and register the exposed femur surface for knee surgery, but these fail under real-world occlusion, which often occurs during an actual surgical procedure. Furthermore, such methods are hardware-specific and not accurate enough for clinical acceptance. In this article, we propose a learning-based RGB-D markerless tracking method that is robust against occlusion. To avoid expensive surgical data collection, a well-known challenge for surgical task training, we generate synthetic RGB-D data covering various occlusion scenarios. A new segmentation network featuring dynamic region-of-interest prediction and 3-D geometric segmentation is designed to learn occlusion-related knowledge from the simulated instances. Extensive experiments show that our proposed method achieves new state-of-the-art results in markerless bone tracking. Furthermore, our method generalizes well to new cameras and new target models, including a cadaver, without the need for network retraining. Using a high-quality RGB-D camera, our proposed visual tracking method achieves an accuracy of 1°-2° and 2-4 mm on a phantom knee, which meets the clinical standard.
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2021.3134764