mustGAN: Multi-Stream Generative Adversarial Networks for MR Image Synthesis
Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts is limited in practice by various factors including scan time and patient motion. Synthesis of missing or corrupted contrasts can alleviate this limitation to improve clinical utility. Common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, here we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The shared feature maps generated in the many-to-one stream and the complementary feature maps generated in the one-to-one streams are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Qualitative and quantitative assessments on T1-, T2-, PD-weighted and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods.
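The multi-stream design described in the abstract can be sketched schematically. This is an illustrative toy, not the authors' implementation: the stream and fusion functions below are placeholder stand-ins for learned networks, and the names (`one_to_one_stream`, `many_to_one_stream`, `fusion_block`) are hypothetical.

```python
import numpy as np

def one_to_one_stream(source):
    # placeholder encoder: a per-source feature map sensitive to that source alone
    return np.tanh(source)

def many_to_one_stream(sources):
    # placeholder joint encoder: a shared feature map over all sources at once
    return np.tanh(np.mean(sources, axis=0))

def fusion_block(shared, complementary):
    # stack the shared map with the complementary per-stream maps and mix them
    # down to a single map (a stand-in for learned fusion weights)
    stacked = np.concatenate([shared[None]] + [c[None] for c in complementary], axis=0)
    return stacked.mean(axis=0)

def mustgan_forward(sources):
    complementary = [one_to_one_stream(s) for s in sources]  # unique features per source
    shared = many_to_one_stream(np.stack(sources))           # features common across sources
    return fusion_block(shared, complementary)               # fused synthesis features

sources = [np.random.rand(8, 8) for _ in range(3)]  # e.g. T1-, T2-, PD-weighted slices
out = mustgan_forward(sources)
print(out.shape)  # → (8, 8)
```

In the paper the fusion point is chosen adaptively per task; here it is fixed, which is the main simplification of this sketch.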
Published in: arXiv.org, 2019-09
Main Authors: Yurt, Mahmut; Salman Ul Hassan Dar; Erdem, Aykut; Erdem, Erkut; Çukur, Tolga
Format: Article
Language: English
Subjects: Feature maps; Generative adversarial networks; Image contrast; Image enhancement; Magnetic resonance imaging; Medical imaging; Representations; Streams; Synthesis
Online Access: Get full text
container_title | arXiv.org |
creator | Yurt, Mahmut; Salman Ul Hassan Dar; Erdem, Aykut; Erdem, Erkut; Çukur, Tolga |
description | Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts is limited in practice by various factors including scan time and patient motion. Synthesis of missing or corrupted contrasts can alleviate this limitation to improve clinical utility. Common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, here we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The shared feature maps generated in the many-to-one stream and the complementary feature maps generated in the one-to-one streams are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Qualitative and quantitative assessments on T1-, T2-, PD-weighted and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods. |
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2019-09 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2297569058 |
source | Publicly Available Content Database |
subjects | Feature maps; Generative adversarial networks; Image contrast; Image enhancement; Magnetic resonance imaging; Medical imaging; Representations; Streams; Synthesis |
title | mustGAN: Multi-Stream Generative Adversarial Networks for MR Image Synthesis |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T14%3A19%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=mustGAN:%20Multi-Stream%20Generative%20Adversarial%20Networks%20for%20MR%20Image%20Synthesis&rft.jtitle=arXiv.org&rft.au=Yurt,%20Mahmut&rft.date=2019-09-25&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2297569058%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_22975690583%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2297569058&rft_id=info:pmid/&rfr_iscdi=true |