
Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols

Bibliographic Details
Published in: Frontiers in computational neuroscience, 2022-08, Vol. 16, p. 887633
Main Authors: Li, Bryan M., Castorina, Leonardo V., Valdés Hernández, Maria del C., Clancy, Una, Wiseman, Stewart J., Sakka, Eleni, Storkey, Amos J., Jaime Garcia, Daniela, Cheng, Yajun, Doubal, Fergus, Thrippleton, Michael T., Stringer, Michael, Wardlaw, Joanna M.
Format: Article
Language:English
Subjects: Algorithms; Attention; Automation; brain imaging; Brain research; Clinical medicine; Computational neuroscience; Datasets; Deep learning; explainable artificial intelligence; image reconstruction; Magnetic Resonance Imaging; Methods; Neural networks; Neuroimaging; Neuroscience; Precision medicine; Segmentation; Substantia alba; super-resolution
Description: Vast quantities of Magnetic Resonance Images (MRI) are routinely acquired in clinical practice but, to speed up acquisition, these scans are typically of a quality that is sufficient for clinical diagnosis but sub-optimal for large-scale precision medicine, computational diagnostics, and large-scale neuroimaging collaborative research. Here, we present a critic-guided framework to upsample low-resolution (often 2D) MRI full scans to help overcome these limitations. We incorporate feature-importance and self-attention methods into our model to improve the interpretability of this study. We evaluate our framework on paired low- and high-resolution brain MRI structural full scans (i.e., T1-, T2-weighted, and FLAIR sequences are simultaneously input) obtained in clinical and research settings from scanners manufactured by Siemens, Phillips, and GE. We show that the upsampled MRIs are qualitatively faithful to the ground-truth high-quality scans (PSNR = 35.39; MAE = 3.78E−3; NMSE = 4.32E−10; SSIM = 0.9852; mean normal-appearing gray/white matter ratio intensity differences ranging from 0.0363 to 0.0784 for FLAIR, from 0.0010 to 0.0138 for T1-weighted, and from 0.0156 to 0.074 for T2-weighted sequences). The automatic raw segmentation of tissues and lesions using the super-resolved images has fewer false positives and higher accuracy than those obtained from interpolated images in protocols represented with more than three sets in the training sample, making our approach a strong candidate for practical application in clinical and collaborative research.
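The record does not include the authors' evaluation code, so the following Python sketch is only a minimal illustration of how the four reported fidelity metrics (PSNR, MAE, NMSE, SSIM) can be computed for a pair of scans with NumPy and scikit-image. The NMSE normalisation shown is one common convention and may differ from the one used in the paper; the function name evaluate_pair is hypothetical.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(hr, sr):
    """Compare a ground-truth high-resolution scan (hr) with a
    super-resolved scan (sr) of the same shape, e.g. a 3D MRI volume."""
    hr = np.asarray(hr, dtype=np.float64)
    sr = np.asarray(sr, dtype=np.float64)
    data_range = float(hr.max() - hr.min())  # intensity range of the reference

    return {
        "PSNR": peak_signal_noise_ratio(hr, sr, data_range=data_range),
        "MAE": float(np.mean(np.abs(hr - sr))),
        # NMSE here is squared error normalised by the energy of the
        # reference signal -- an assumption, since the record does not
        # state the paper's exact normalisation.
        "NMSE": float(np.sum((hr - sr) ** 2) / np.sum(hr ** 2)),
        "SSIM": structural_similarity(hr, sr, data_range=data_range),
    }

# Usage with synthetic volumes (one call per sequence, e.g. T1, T2, FLAIR):
# hr = np.random.rand(64, 64, 64)
# sr = hr + 0.01 * np.random.randn(64, 64, 64)
# print(evaluate_pair(hr, sr))

In the paper's setting, a comparison like this would be run once per sequence against the paired high-resolution acquisition, which is presumably how the per-sequence intensity-difference ranges quoted above were obtained.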
DOI: 10.3389/fncom.2022.887633
Publisher: Frontiers Research Foundation / Frontiers Media S.A. (Lausanne)
Publication Date: 2022-08-25
PMID: 36093418
Rights: Copyright © 2022 the authors; licensed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/)
ISSN: 1662-5188
EISSN: 1662-5188
Source: PubMed Central (PMC)