Feature-targeted deep learning framework for pulmonary tumorous Cone-beam CT (CBCT) enhancement with multi-task customized perceptual loss and feature-guided CycleGAN
Thoracic cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy (IGRT) to provide updated patient anatomy information for lung cancer treatments. However, CBCT images often suffer from streaking artifacts and noise caused by under-sampled projections...
Published in: | Computerized medical imaging and graphics 2025-04, Vol.121, p.102487, Article 102487 |
---|---|
Main Authors: | Zhu, Jiarui; Sun, Hongfei; Chen, Weixing; Zhi, Shaohua; Liu, Chenyang; Zhao, Mayang; Zhang, Yuanpeng; Zhou, Ta; Lam, Yu Lap; Peng, Tao; Qin, Jing; Zhao, Lina; Cai, Jing; Ren, Ge |
Format: | Article |
Language: | English |
Subjects: | Cone-beam computed tomography; Image-to-image translation; Multi-task learning; Perceptual loss; Pulmonary imaging |
Citations: | Items that this one cites |
Online Access: | Get full text |
Description:

Thoracic cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy (IGRT) to provide updated patient anatomy information for lung cancer treatments. However, CBCT images often suffer from streaking artifacts and noise caused by under-sampled projections and low-dose exposure, resulting in loss of lung anatomy that contains crucial pulmonary tumorous and functional information. While recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts, they have limited ability to preserve the anatomical details that carry crucial tumorous information, owing to a lack of targeted guidance. To address this issue, we propose a novel feature-targeted deep learning framework that generates ultra-quality pulmonary imaging from CBCT of lung cancer patients via a multi-task customized feature-to-feature perceptual loss function and a feature-guided CycleGAN. The framework comprises two main components: a multi-task learning feature-selection network (MTFS-Net) for building a customized feature-to-feature perceptual loss function (CFP-loss), and a feature-guided CycleGAN network. Our experiments showed that the proposed framework can generate synthesized CT (sCT) images of the lung with high similarity to CT images, achieving an average SSIM of 0.9747 and an average PSNR of 38.5995 globally, and an average Spearman's coefficient of 0.8929 within the tumor region on multi-institutional datasets. The sCT images also achieved visually pleasing results, with effective artifact suppression, noise reduction, and preservation of distinctive anatomical details. Functional imaging tests further demonstrated the pulmonary texture correction performance of the sCT images, and the similarity between functional imaging generated from sCT and from CT reached an average DSC of 0.9147, SCC of 0.9615, and R of 0.9661. Comparison experiments with pixel-to-pixel losses also showed that the proposed perceptual loss significantly enhances the performance of the generative models involved. Our results indicate that the proposed framework outperforms state-of-the-art models for pulmonary CBCT enhancement. This framework holds great promise for generating high-quality pulmonary imaging from CBCT suitable for supporting further analysis of lung cancer treatment.

Highlights:
•The first deep learning framework for enhancing pulmonary CBCT images with fine tumorous and functional information preservation.
•Unique multi-task feature-selection strategy for building a customizable perceptual loss function.
•Improved performance on lung CBCT-to-CT translation compared to pixel-to-pixel models.
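The abstract's core idea for the CFP-loss is to penalize the generator on distances between feature maps extracted by a pretrained feature-selection network (MTFS-Net) rather than on pixel-wise differences. The record does not include that network's architecture or training details, so the snippet below is only a minimal PyTorch sketch of a generic feature-to-feature perceptual loss; `feature_net`, the assumption that it returns a list of feature maps, and the per-layer weights are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeaturePerceptualLoss(nn.Module):
    """Generic feature-to-feature perceptual loss.

    Compares sCT and CT images in the feature space of a frozen
    feature-extraction network (a stand-in for MTFS-Net here).
    """

    def __init__(self, feature_net: nn.Module, layer_weights=None):
        super().__init__()
        self.feature_net = feature_net.eval()
        for p in self.feature_net.parameters():
            p.requires_grad_(False)           # keep the extractor frozen
        self.layer_weights = layer_weights    # optional per-layer weights

    def forward(self, sct: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        # Assumption: feature_net returns a list/tuple of feature maps.
        feats_sct = self.feature_net(sct)
        feats_ct = self.feature_net(ct)
        weights = self.layer_weights or [1.0] * len(feats_sct)
        loss = sct.new_zeros(())
        for w, f_s, f_c in zip(weights, feats_sct, feats_ct):
            loss = loss + w * F.l1_loss(f_s, f_c)  # L1 distance per feature level
        return loss
```

In a CycleGAN-style setup such a term would typically be added to the adversarial and cycle-consistency losses when updating the generators; how the published framework weights and combines these terms is not specified in this record.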
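The global SSIM and PSNR values and the tumor-region rank correlation reported above are standard image-similarity metrics. As a rough illustration only (not the authors' evaluation pipeline), they can be computed for one slice pair with scikit-image and SciPy; `sct`, `ct`, and `tumor_mask` are hypothetical inputs assumed to be aligned 2-D arrays on the same intensity scale.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from scipy.stats import spearmanr

def slice_similarity(sct: np.ndarray, ct: np.ndarray, tumor_mask: np.ndarray):
    """Global SSIM/PSNR plus tumor-region Spearman correlation for one slice pair."""
    data_range = ct.max() - ct.min()  # intensity range of the reference slice
    ssim = structural_similarity(ct, sct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    rho, _ = spearmanr(ct[tumor_mask].ravel(), sct[tumor_mask].ravel())
    return ssim, psnr, rho

# Synthetic data just to show the call signature (hypothetical shapes/values).
ct = np.random.rand(256, 256).astype(np.float32)
sct = ct + 0.01 * np.random.randn(256, 256).astype(np.float32)
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 100:140] = True
print(slice_similarity(sct, ct, mask))
```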
DOI: | 10.1016/j.compmedimag.2024.102487 |
---|---|
ISSN: | 0895-6111 |
EISSN: | 1879-0771 |
Source: | ScienceDirect Freedom Collection |