Incorporating Temporal Prior from Motion Flow for Instrument Segmentation in Minimally Invasive Surgery Video
Automatic instrument segmentation in video is a fundamental yet challenging problem for robot-assisted minimally invasive surgery. In this paper, we propose a novel framework that leverages instrument motion information by incorporating a derived temporal prior into an attention pyramid network for accurate segmentation. The inferred prior provides a reliable indication of instrument location and shape, propagated from the previous frame to the current frame according to inter-frame motion flow. This prior is injected into the middle of an encoder-decoder segmentation network as the initialization of a pyramid of attention modules, explicitly guiding the segmentation output from coarse to fine. In this way, the temporal dynamics and the attention network effectively complement and benefit each other. In addition, the temporal prior enables semi-supervised learning with periodically unlabeled video frames, simply by reverse execution. We extensively validate our method on the public 2017 MICCAI EndoVis Robotic Instrument Segmentation Challenge dataset with three different tasks. Our method consistently exceeds state-of-the-art results across all three tasks by a large margin. Our semi-supervised variant also demonstrates promising potential for reducing annotation cost in clinical practice.
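The abstract describes two coupled mechanisms: a temporal prior obtained by propagating the previous frame's segmentation to the current frame along inter-frame motion flow, and the use of that prior to initialize a coarse-to-fine pyramid of attention modules in the middle of an encoder-decoder network. The snippet below is a minimal, illustrative sketch of the prior-propagation step only, not the authors' implementation: OpenCV's Farneback algorithm is assumed as a stand-in flow estimator, and `propagate_mask` / `prior_pyramid` are hypothetical helper names.

```python
# Illustrative sketch only; the flow estimator, helper names, and pyramid
# depth are assumptions, not details taken from the paper.
import cv2
import numpy as np

def propagate_mask(prev_gray, curr_gray, prev_mask):
    """Warp the previous frame's instrument mask onto the current frame
    along inter-frame motion flow, producing a soft temporal prior."""
    # Backward flow (current -> previous), so cv2.remap can sample the
    # previous mask at the pixel each current-frame location came from.
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n,
    # poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(
        curr_gray, prev_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    prior = cv2.remap(prev_mask.astype(np.float32), map_x, map_y,
                      cv2.INTER_LINEAR,
                      borderMode=cv2.BORDER_CONSTANT, borderValue=0)
    return np.clip(prior, 0.0, 1.0)  # rough location/shape prior in [0, 1]

def prior_pyramid(prior, num_levels=3):
    """Downsample the prior to successively coarser resolutions, e.g. to
    seed a coarse-to-fine stack of attention maps inside the decoder."""
    levels = [prior]
    for _ in range(num_levels - 1):
        levels.append(cv2.pyrDown(levels[-1]))  # halve resolution per level
    return levels[::-1]  # coarsest first, matching a coarse-to-fine decoder
```

On the same reading, the abstract's semi-supervised "reverse execution" would amount to running the identical warp in the opposite temporal direction, so that a labeled frame can supply pseudo-priors for neighbouring unlabeled frames; this too is a sketch-level interpretation rather than the paper's exact procedure.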
| Published in: | arXiv.org, 2019-07 |
|---|---|
| Main Authors: | Jin, Yueming; Cheng, Keyun; Dou, Qi; Pheng-Ann Heng |
| Format: | Article |
| Language: | English |
| Subjects: | Annotations; Coders; Encoders-Decoders; Laparoscopy; Robotic surgery; Segmentation |
| container_title | arXiv.org |
|---|---|
| creator | Jin, Yueming; Cheng, Keyun; Dou, Qi; Pheng-Ann Heng |
| format | article |
| fulltext | fulltext |
| identifier | EISSN: 2331-8422 |
| ispartof | arXiv.org, 2019-07 |
| issn | 2331-8422 |
| language | eng |
| recordid | cdi_proquest_journals_2260229916 |
| source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
| subjects | Annotations; Coders; Encoders-Decoders; Laparoscopy; Robotic surgery; Segmentation |
| title | Incorporating Temporal Prior from Motion Flow for Instrument Segmentation in Minimally Invasive Surgery Video |