Indirect deformable image registration using synthetic image generated by unsupervised deep learning
Published in: Image and Vision Computing, 2024-08, Vol. 148, p. 105143, Article 105143
Format: Article
Language: English
Summary: 3D image registration is now common in many medical domains. Multimodal registration implies the use of different imaging modalities, which results in lower accuracy compared to monomodal registration. The aim of this study was to propose a novel approach for deformable image registration (DIR) that incorporates an unsupervised deep learning (DL)-based generation step. The objective was to reduce the challenge of multimodal registration to monomodal registration.
Two datasets from prostate radiotherapy patients were used to evaluate the proposed method. The first dataset consisted of Computed Tomography (CT)/Cone Beam Computed Tomography (CBCT) pairs from 23 patients acquired with different CBCT devices. The second dataset included Magnetic Resonance Imaging (MRI)/CT pairs from two different care centers, using different MRI devices (0.35 T MRIdian MR-Linac, 1.5 T GE Lightspeed MRI). Following a preprocessing step essential for ensuring DL synthesis accuracy and standardizing the database, synthetic CTs (sCTreg) were generated using an unsupervised conditional Generative Adversarial Network (cGAN). The sCTs generated from CBCT or MRI were then used for deformable registration with CT scans. This registration method was compared to three standard methods: rigid registration, Elastix registration based on B-splines, and VoxelMorph-based registration (applied exclusively to CBCT/CT). The comparison endpoints were the Dice coefficients computed between delineated structures for both datasets.
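The evaluation endpoint, the Dice coefficient between two delineated structures, can be sketched as follows. This is a minimal NumPy illustration with toy binary masks; the function name and example arrays are ours, not from the paper:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2D example: mask a covers 4 voxels, mask b covers 6, overlap is 4.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

A Dice of 1.0 means the registered and reference contours coincide exactly; in practice the metric is computed per organ (e.g. prostate, bladder, rectum) on 3D masks.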
For both datasets, intermediary sCT generation provided the highest Dice coefficients. Dice coefficients reached 0.85, 0.85 and 0.75 for the prostate, bladder and rectum in dataset 1, and 0.90, 0.95 and 0.87 respectively in dataset 2. When no sCT was used, Dice coefficients reached 0.66, 0.78 and 0.66 for dataset 1, and 0.93, 0.87 and 0.84 for dataset 2. Furthermore, evaluating the impact of registration on sCT generation showed that lower Mean Absolute Errors were obtained when the registration was conducted with an sCT.
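The Mean Absolute Error used to assess sCT quality is the voxel-wise average of absolute intensity differences (in Hounsfield units) between the reference CT and the synthetic CT. A minimal sketch, with a hypothetical body-mask argument and toy arrays of our own:

```python
import numpy as np

def mean_absolute_error(ct, sct, mask=None):
    """Voxel-wise MAE (in HU) between a reference CT and a synthetic CT.

    If `mask` is given, the error is averaged only over masked voxels
    (e.g. inside the patient body contour).
    """
    diff = np.abs(np.asarray(ct, dtype=float) - np.asarray(sct, dtype=float))
    if mask is not None:
        diff = diff[np.asarray(mask, dtype=bool)]
    return diff.mean()

ct  = np.array([[0.0, 100.0], [200.0, -50.0]])   # toy reference HU values
sct = np.array([[10.0, 90.0], [180.0, -40.0]])   # toy synthetic HU values
print(mean_absolute_error(ct, sct))  # (10+10+20+10)/4 = 12.5
```

A lower MAE indicates that the generated sCT intensities are closer to the true CT, which the study links to registration being performed with an sCT.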
Using unsupervised deep learning to synthesize an intermediate sCT improved registration accuracy in radiotherapy applications involving two distinct imaging modalities.
Highlights:
• We translated a multimodal CBCT-MR/CT registration into an sCT/CT registration.
• The unsupervised synthesis method was based on a cGAN using a novel perceptual loss.
• The best registration accuracy was obtained via the synthetic image generation step.
• The content …
ISSN: 0262-8856, 1872-8138
DOI: 10.1016/j.imavis.2024.105143