Estimating 3-dimensional liver motion using deep learning and 2-dimensional ultrasound images

Purpose: The main purpose of this study is to construct a system to track the tumor position during radiofrequency ablation (RFA) treatment. Existing tumor tracking systems are designed to track a tumor in a two-dimensional (2D) ultrasound (US) image. As a result, the three-dimensional (3D) motion of the organs cannot be accommodated and the ablation area may be lost. In this study, we propose a method for estimating the 3D movement of the liver as a preliminary system for tumor tracking. Additionally, in current 3D movement estimation systems, the motion of different structures during RFA could reduce the tumor visibility in US images. Therefore, we also aim to improve the estimation of the 3D movement of the liver by improving the liver segmentation. We propose a novel approach to estimate the relative 6-axial movement (x, y, z, roll, pitch, and yaw) between the liver and the US probe in order to estimate the overall movement of the liver.

Method: We used a convolutional neural network (CNN) to estimate the 3D displacement from two-dimensional US images. In addition, to improve the accuracy of the estimation, we introduced a segmentation map of the liver region as the input for the regression network. Specifically, we improved the extraction accuracy of the liver region by using a bi-directional convolutional LSTM U-Net with densely connected convolutions (BCDU-Net).

Results: By using BCDU-Net, the accuracy of the segmentation was dramatically improved, and as a result, the accuracy of the movement estimation was also improved. The mean absolute error for the out-of-plane direction was 0.0645 mm/frame.

Conclusion: The experimental results show the effectiveness of our novel method to identify the movement of the liver by BCDU-Net and CNN. Precise segmentation of the liver by BCDU-Net also contributes to enhancing the performance of the liver movement estimation.
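To illustrate the Method described above, the following is a minimal, hypothetical PyTorch-style sketch of a regression CNN that takes two consecutive 2D US frames plus a liver segmentation mask (such as one produced by a segmentation network like BCDU-Net) and outputs the six motion parameters. The architecture, input layout, layer sizes, and all names are assumptions made for illustration only; the record does not include the authors' implementation details.

```python
# Hypothetical sketch (not the authors' code): a small CNN that regresses the
# relative 6-axis motion (x, y, z, roll, pitch, yaw) between consecutive 2D US
# frames, with the liver segmentation mask supplied as an extra input channel.
import torch
import torch.nn as nn

class MotionRegressionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: previous frame, current frame, and the liver mask of the
        # current frame, stacked as 3 channels (an assumed input layout).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(128, 6)  # x, y, z, roll, pitch, yaw

    def forward(self, prev_frame, curr_frame, liver_mask):
        x = torch.cat([prev_frame, curr_frame, liver_mask], dim=1)
        x = self.features(x).flatten(1)
        return self.regressor(x)

# Example usage with dummy 256x256 single-channel US frames.
model = MotionRegressionCNN()
prev_f = torch.rand(1, 1, 256, 256)
curr_f = torch.rand(1, 1, 256, 256)
mask = torch.rand(1, 1, 256, 256)     # would come from a segmentation network
motion = model(prev_f, curr_f, mask)  # shape: (1, 6)
print(motion.shape)
```

In this sketch, feeding the segmentation mask as an additional channel is one plausible way to give the regression network the liver-region prior that the abstract says improves motion estimation; per-frame displacement estimates would then be accumulated over the US sequence to track overall liver motion.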

Bibliographic Details
Published in: International journal for computer assisted radiology and surgery, 2020-12, Vol. 15 (12), p. 1989-1995
Main Authors: Yagasaki, Shiho, Koizumi, Norihiro, Nishiyama, Yu, Kondo, Ryosuke, Imaizumi, Tsubasa, Matsumoto, Naoki, Ogawa, Masahiro, Numata, Kazushi
Format: Article
Language: English
DOI: 10.1007/s11548-020-02265-1
ISSN: 1861-6410
EISSN: 1861-6429
Subjects:
Computer Imaging
Computer Science
Deep Learning
Health Informatics
Humans
Image Processing, Computer-Assisted - methods
Imaging
Liver - diagnostic imaging
Liver - surgery
Liver Neoplasms - diagnostic imaging
Liver Neoplasms - surgery
Medicine
Medicine & Public Health
Neural Networks, Computer
Organ Motion - physiology
Pattern Recognition and Graphics
Radiofrequency Ablation
Radiology
Short Communication
Surgery
Ultrasonography - methods
Vision