
Learning MRI Contrast-Agnostic Registration

Bibliographic Details
Main Authors: Hoffmann, Malte, Billot, Benjamin, Iglesias, Juan E., Fischl, Bruce, Dalca, Adrian V.
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_end_page 903
container_start_page 899
container_volume 2023
creator Hoffmann, Malte
Billot, Benjamin
Iglesias, Juan E.
Fischl, Bruce
Dalca, Adrian V.
description We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to magnetic resonance imaging (MRI) contrast. While classical methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning methods are fast at test time but limited to images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency using a generative strategy that exposes networks to a wide range of images synthesized from segmentations during training, forcing them to generalize across contrasts. We show that networks trained within this framework generalize to a broad array of unseen MRI contrasts and surpass classical state-of-the-art brain registration accuracy by up to 12.4 Dice points for a variety of tested contrast combinations. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images during training.
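The generative strategy described above can be illustrated with a minimal sketch (not the authors' implementation): draw a random intensity for each label in a segmentation and add noise, so every training step shows the network a fresh, unseen "contrast". The function name and parameters below are hypothetical, chosen only for illustration.

```python
import numpy as np

def synthesize_image(label_map, rng=None):
    """Synthesize an image of arbitrary contrast from a label map by
    assigning a random mean intensity to each label and adding noise."""
    rng = np.random.default_rng(rng)
    image = np.zeros(label_map.shape, dtype=float)
    for lab in np.unique(label_map):
        # Each label gets a random intensity, simulating an unseen contrast.
        image[label_map == lab] = rng.uniform(0.0, 1.0)
    image += rng.normal(0.0, 0.05, size=image.shape)  # additive noise
    return image

# Toy 2-D label map with two nested structures on a background.
seg = np.zeros((8, 8), dtype=int)
seg[2:6, 2:6] = 1
seg[3:5, 3:5] = 2
img = synthesize_image(seg, rng=0)
```

Repeated calls with different seeds yield images of the same geometry but different appearance, which is what forces a registration network trained on such pairs to become contrast-agnostic.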
doi_str_mv 10.1109/ISBI48211.2021.9434113
format conference_proceeding
eisbn 1665412461
9781665412469
pmid 38213549
publisher IEEE
publication_place United States
publication_date 2021-04-01
fulltext fulltext_linktorsrc
identifier ISSN: 1945-7928
ispartof 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), 2021, Vol.2023, p.899-903
issn 1945-7928
1945-8452
language eng
recordid cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_10782386
source IEEE Xplore All Conference Series
subjects Arrays
deep learning without data
Deformable registration
Image registration
Image segmentation
image synthesis
Learning systems
Magnetic resonance imaging
MRI-contrast independence
Shape
Training
title Learning MRI Contrast-Agnostic Registration