
Deformation-Compensated Learning for Image Reconstruction Without Ground Truth

Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground-truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
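The abstract describes the core idea behind DeCoLearn: a reconstruction network and a registration network are trained jointly, with pairs of measurements of the same (deformed) object serving as mutual training targets once the deformation between them has been compensated. The sketch below illustrates that training loop in PyTorch under simplifying assumptions; the toy architectures, the adjoint operator (e.g., a zero-filled inverse Fourier transform for MRI), the loss weights, and all names are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of deformation-compensated (DeCoLearn-style) training as described
# in the abstract. Everything below is illustrative, not the published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReconNet(nn.Module):
    """Toy image-to-image reconstruction network (stand-in for a deep reconstruction model)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, zero_filled):
        return zero_filled + self.net(zero_filled)  # residual refinement of the adjoint image


class RegNet(nn.Module):
    """Toy registration network: predicts a dense 2-channel displacement field (pixels)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))


def warp(image, flow):
    """Warp `image` (B, C, H, W) by a displacement field `flow` (B, 2, H, W)."""
    b, _, h, w = image.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    theta = torch.eye(2, 3, device=image.device).unsqueeze(0).repeat(b, 1, 1)
    grid = F.affine_grid(theta, image.shape, align_corners=False)  # (B, H, W, 2)
    # Convert pixel displacements to grid_sample's normalized convention (x first, then y).
    flow_norm = torch.stack(
        [2.0 * flow[:, 0] / max(w - 1, 1), 2.0 * flow[:, 1] / max(h - 1, 1)], dim=-1
    )
    return F.grid_sample(image, grid + flow_norm, align_corners=False)


def training_step(recon, reg, y1, y2, adjoint, opt, smooth_weight=0.1):
    """One joint update from two measurements y1, y2 of the same object, no ground truth."""
    opt.zero_grad()
    x1, x2 = recon(adjoint(y1)), recon(adjoint(y2))
    flow = reg(x1, x2.detach())        # estimate the deformation from reconstruction 1 to 2
    x1_warped = warp(x1, flow)         # compensate the deformation before comparing
    loss = F.mse_loss(x1_warped, x2)   # N2N-style loss between the two reconstructions
    loss = loss + smooth_weight * (flow.diff(dim=-1).abs().mean()
                                   + flow.diff(dim=-2).abs().mean())  # smoothness prior
    loss.backward()
    opt.step()
    return loss.item()
```

In a training loop one would iterate over pairs of measurements of the same subject acquired at different motion states, with a single optimizer covering the parameters of both networks; warping one reconstruction onto the other before applying the loss is what keeps an N2N-style objective meaningful when the object moves between acquisitions.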


Bibliographic Details
Published in: IEEE transactions on medical imaging 2022-09, Vol.41 (9), p.2371-2384
Main Authors: Gan, Weijie, Sun, Yu, Eldeniz, Cihat, Liu, Jiaming, An, Hongyu, Kamilov, Ulugbek S.
Format: Article
Language:English
container_end_page 2384
container_issue 9
container_start_page 2371
container_title IEEE transactions on medical imaging
container_volume 41
creator Gan, Weijie
Sun, Yu
Eldeniz, Cihat
Liu, Jiaming
An, Hongyu
Kamilov, Ulugbek S.
description Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground-truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
doi_str_mv 10.1109/TMI.2022.3163018
format article
identifier ISSN: 0278-0062
ispartof IEEE transactions on medical imaging, 2022-09, Vol.41 (9), p.2371-2384
issn 0278-0062
1558-254X
eissn 1558-254X
language eng
recordid cdi_pubmed_primary_35344490
source IEEE Electronic Library (IEL) Journals
subjects Artificial neural networks
Convolutional neural networks
Deep learning
Image processing
Image Processing, Computer-Assisted - methods
Image quality
Image reconstruction
Imaging
Inverse problems
Learning
Machine learning
Magnetic Resonance Imaging
magnetic resonance imaging (MRI)
Medical imaging
Neural networks
Neural Networks, Computer
Noise measurement
Strain
Training
title Deformation-Compensated Learning for Image Reconstruction Without Ground Truth