
Indirect deformable image registration using synthetic image generated by unsupervised deep learning

3D image registration is now common in many medical domains. Multimodal registration implies the use of different imaging modalities, which results in lower accuracy compared to monomodal registration. The aim of this study was to propose a novel approach for deformable image registration (DIR) that incorporates an unsupervised deep learning (DL)-based generation step. The objective was to reduce the challenge of multimodal registration to that of monomodal registration. Two datasets from prostate radiotherapy patients were used to evaluate the proposed method. The first dataset consisted of Computed Tomography (CT)/Cone Beam Computed Tomography (CBCT) pairs from 23 patients using different CBCT devices. The second dataset included Magnetic Resonance Imaging (MRI)/CT pairs from two different care centers, acquired on different MRI devices (0.35 T MRIdian MR-Linac, 1.5 T GE Lightspeed MRI). Following a preprocessing step essential for ensuring DL synthesis accuracy and for standardizing the database, synthetic CTs (sCTreg) were generated using an unsupervised conditional Generative Adversarial Network (cGAN). The sCTs generated from CBCT or MRI were then used for deformable registration with the CT scans. This registration method was compared to three standard methods: rigid registration, Elastix registration based on B-splines, and VoxelMorph-based registration (applied exclusively to CBCT/CT). The endpoint of comparison was the Dice coefficient calculated between delineated structures in both datasets. For both datasets, intermediary sCT generation provided the highest Dice coefficients: 0.85, 0.85, and 0.75 for the prostate, bladder, and rectum in dataset 1, and 0.90, 0.95, and 0.87, respectively, in dataset 2. When the sCT was not used, the Dice coefficients reached 0.66, 0.78, and 0.66 in dataset 1 and 0.93, 0.87, and 0.84 in dataset 2. Furthermore, evaluating the impact of registration on sCT generation showed that lower Mean Absolute Errors were obtained when the registration was conducted with an sCT. Using unsupervised deep learning to synthesize an intermediate sCT improved registration accuracy in radiotherapy applications employing two distinct imaging modalities.

Highlights:
• We translated a multimodal CBCT-MR/CT registration into an sCT/CT registration.
• The unsupervised synthesis method was based on a cGAN using a novel perceptual loss.
• The best registration accuracy was obtained via a synthetic image generation step.
• The content loss improved conventional registration based on mutual information.
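The study's two evaluation endpoints are the Dice coefficient between delineated structures and the Mean Absolute Error between a synthetic CT and its reference. A minimal NumPy sketch of both metrics is given below; the function names are illustrative and not taken from the authors' code.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    Returns 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both structures absent: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def mean_absolute_error(image_a, image_b):
    """Voxel-wise Mean Absolute Error between two images of equal shape."""
    a = np.asarray(image_a, dtype=float)
    b = np.asarray(image_b, dtype=float)
    return float(np.mean(np.abs(a - b)))
```

In the paper's setting, the masks would be the delineated prostate, bladder, or rectum on the deformed and reference images, and the MAE would be computed between sCT and CT intensities.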

Bibliographic Details
Published in: Image and Vision Computing, 2024-08, Vol. 148, p. 105143, Article 105143
Main Authors: Hémon, Cédric; Texier, Blanche; Chourak, Hilda; Simon, Antoine; Bessières, Igor; de Crevoisier, Renaud; Castelli, Joël; Lafond, Caroline; Barateau, Anaïs; Nunes, Jean-Claude
Format: Article
Language: English
Subjects: Bioengineering; CBCT; Life Sciences; MRI; Multimodal image registration; Radiotherapy; Synthetic-CT; Unsupervised generation
DOI: 10.1016/j.imavis.2024.105143
ISSN: 0262-8856
EISSN: 1872-8138
Publisher: Elsevier B.V.