Depth over RGB: automatic evaluation of open surgery skills using depth camera
Published in: | International journal for computer assisted radiology and surgery, 2024, Vol. 19 (7), p. 1349-1357 |
---|---|
Main Authors: | Zuckerman, Ido; Werner, Nicole; Kouchly, Jonathan; Huston, Emma; DiMarco, Shannon; DiMusto, Paul; Laufer, Shlomi |
Format: | Article |
Language: | English |
container_end_page | 1357 |
container_issue | 7 |
container_start_page | 1349 |
container_title | International journal for computer assisted radiology and surgery |
container_volume | 19 |
creator | Zuckerman, Ido; Werner, Nicole; Kouchly, Jonathan; Huston, Emma; DiMarco, Shannon; DiMusto, Paul; Laufer, Shlomi |
description | Purpose
In this paper, we present a novel approach to the automatic evaluation of open surgery skills using depth cameras. This work is intended to show that depth cameras achieve similar results to RGB cameras, which is the common method in the automatic evaluation of open surgery skills. Moreover, depth cameras offer advantages such as robustness to lighting variations, camera positioning, simplified data compression, and enhanced privacy, making them a promising alternative to RGB cameras.
Methods
Experts and novice surgeons completed two simulators of open suturing. We focused on hand and tool detection and action segmentation in suturing procedures. YOLOv8 was used for tool detection in RGB and depth videos. Furthermore, UVAST and MSTCN++ were used for action segmentation. Our study includes the collection and annotation of a dataset recorded with Azure Kinect.
Results
We demonstrated that using depth cameras in object detection and action segmentation achieves comparable results to RGB cameras. Furthermore, we analyzed 3D hand path length, revealing significant differences between experts and novice surgeons, emphasizing the potential of depth cameras in capturing surgical skills. We also investigated the influence of camera angles on measurement accuracy, highlighting the advantages of 3D cameras in providing a more accurate representation of hand movements.
Conclusion
Our research contributes to advancing the field of surgical skill assessment by leveraging depth cameras for more reliable and privacy-preserving evaluations. The findings suggest that depth cameras can be valuable in assessing surgical skills and provide a foundation for future research in this area. |
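As a concrete illustration of the detection step described in the methods, the sketch below runs a YOLOv8 model over exported depth frames with the `ultralytics` package. This is not the authors' code: the checkpoint name `tools_depth.pt`, the `depth_frames/` directory, and the assumption that depth maps were saved as 8-bit images are all placeholders for illustration.

```python
# Illustrative only: run a YOLOv8 detector over exported depth frames.
# "tools_depth.pt" and "depth_frames/" are hypothetical placeholders,
# not artifacts released with the study.
import glob

import cv2
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("tools_depth.pt")  # hypothetical fine-tuned tool-detection weights

for path in sorted(glob.glob("depth_frames/*.png")):
    frame = cv2.imread(path)  # depth map stored as an 8-bit image; loaded as 3-channel BGR
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        name = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{path}: {name} ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}) conf={float(box.conf):.2f}")
```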
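The 3D hand path length analyzed in the results is, at its core, the summed Euclidean distance between consecutive hand positions in 3D space. A minimal sketch follows, assuming an (N, 3) array of per-frame hand coordinates has already been extracted; the synthetic trajectory and its units are made up for illustration and are not data from the study.

```python
# Minimal sketch of a 3D path length metric: sum of Euclidean distances
# between consecutive hand positions. The example trajectory is synthetic.
import numpy as np

def path_length_3d(trajectory: np.ndarray) -> float:
    """trajectory: (N, 3) array of per-frame hand positions (x, y, z)."""
    steps = np.diff(trajectory, axis=0)  # displacement vectors between frames
    return float(np.linalg.norm(steps, axis=1).sum())

traj = np.array([[0.0, 0.0, 500.0],
                 [10.0, 0.0, 500.0],
                 [10.0, 5.0, 495.0]])
print(path_length_3d(traj))  # 10 + sqrt(50) ≈ 17.07
```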
doi_str_mv | 10.1007/s11548-024-03158-3 |
format | article |
publisher | Springer International Publishing, Cham |
pmid | 38748053 |
rights | The Author(s) 2024. Published under the Creative Commons Attribution 4.0 license. |
orcid | https://orcid.org/0009-0002-0432-4822 |
identifier | ISSN: 1861-6429 |
ispartof | International journal for computer assisted radiology and surgery, 2024, Vol.19 (7), p.1349-1357 |
issn | 1861-6410 (print); 1861-6429 (electronic) |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_11230951 |
source | Springer Link |
subjects | Annotations; Cameras; Clinical Competence; Computer Imaging; Computer Science; Data compression; Evaluation; Hand tools; Health Informatics; Humans; Imaging; Imaging, Three-Dimensional - methods; Medicine; Medicine & Public Health; Object recognition; Original; Original Article; Pattern Recognition and Graphics; Privacy; Radiology; Segmentation; Simulators; Skills; Surgeons; Surgery; Suture Techniques - education; Suture Techniques - instrumentation; Sutures; Vision |
title | Depth over RGB: automatic evaluation of open surgery skills using depth camera |