An Improved Hybrid Network With a Transformer Module for Medical Image Fusion
Medical image fusion technology is an essential component of computer-aided diagnosis, which aims to extract useful cross-modality cues from raw signals to generate high-quality fused images. Many advanced methods focus on designing fusion rules, but there is still room for improvement in cross-modal information extraction. To this end, we propose a novel encoder-decoder architecture with three technical novelties. First, we divide the medical images into two attributes, namely pixel intensity distribution attributes and texture attributes, and thus design two self-reconstruction tasks to mine as many specific features as possible. Second, we propose a hybrid network combining a CNN and a transformer module to model both long-range and short-range dependencies. Moreover, we construct a self-adaptive weight fusion rule that automatically measures salient features. Extensive experiments on a public medical image dataset and other multimodal datasets show that the proposed method achieves satisfactory performance.
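The abstract's self-adaptive weight fusion rule, which "automatically measures salient features," can be illustrated with a minimal sketch. This is an assumption about the general technique, not the paper's exact method: here salience is taken as channel-wise L1 activity of each modality's feature map, and a softmax over the two salience maps yields per-pixel fusion weights, so the more active source dominates at each location. The function name `self_adaptive_fusion` and the salience measure are hypothetical.

```python
import numpy as np

def self_adaptive_fusion(feat_a, feat_b):
    """Fuse two feature maps with weights derived from their salience.

    Hypothetical sketch of a self-adaptive weight fusion rule: salience
    is the mean absolute activation over the channel axis, and the
    fusion weights are the softmax of the two salience maps.
    """
    # Salience: channel-wise L1 activity, shape (1, H, W).
    s_a = np.abs(feat_a).mean(axis=0, keepdims=True)
    s_b = np.abs(feat_b).mean(axis=0, keepdims=True)
    # Softmax over the two sources gives self-adaptive weights in (0, 1)
    # that sum to 1 at every spatial location.
    e_a, e_b = np.exp(s_a), np.exp(s_b)
    w_a = e_a / (e_a + e_b)
    w_b = e_b / (e_a + e_b)
    return w_a * feat_a + w_b * feat_b

# Two toy "feature maps" (channels, height, width), standing in for
# encoder outputs from two imaging modalities (e.g., MRI and PET).
a = np.random.default_rng(0).normal(size=(8, 16, 16))
b = np.random.default_rng(1).normal(size=(8, 16, 16))
fused = self_adaptive_fusion(a, b)
print(fused.shape)  # (8, 16, 16)
```

Because the weights sum to one per pixel, the fused map is always a convex combination of the two inputs; a learned variant would replace the fixed L1 salience with network-predicted activity maps.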
Published in: | IEEE journal of biomedical and health informatics, 2023-07, Vol. 27 (7), p. 1-12 |
---|---|
Main Authors: | Liu, Yanyu; Zang, Yongsheng; Zhou, Dongming; Cao, Jinde; Nie, Rencan; Hou, Ruichao; Ding, Zhaisheng; Mei, Jiatian |
Format: | Article |
Language: | English |
container_end_page | 12 |
container_issue | 7 |
container_start_page | 1 |
container_title | IEEE journal of biomedical and health informatics |
container_volume | 27 |
creator | Liu, Yanyu Zang, Yongsheng Zhou, Dongming Cao, Jinde Nie, Rencan Hou, Ruichao Ding, Zhaisheng Mei, Jiatian |
description | Medical image fusion technology is an essential component of computer-aided diagnosis, which aims to extract useful cross-modality cues from raw signals to generate high-quality fused images. Many advanced methods focus on designing fusion rules, but there is still room for improvement in cross-modal information extraction. To this end, we propose a novel encoder-decoder architecture with three technical novelties. First, we divide the medical images into two attributes, namely pixel intensity distribution attributes and texture attributes, and thus design two self-reconstruction tasks to mine as many specific features as possible. Second, we propose a hybrid network combining a CNN and a transformer module to model both long-range and short-range dependencies. Moreover, we construct a self-adaptive weight fusion rule that automatically measures salient features. Extensive experiments on a public medical image dataset and other multimodal datasets show that the proposed method achieves satisfactory performance. |
doi_str_mv | 10.1109/JBHI.2023.3264819 |
format | article |
pmid | 37023161 |
identifier | ISSN: 2168-2194 |
ispartof | IEEE journal of biomedical and health informatics, 2023-07, Vol.27 (7), p.1-12 |
issn | 2168-2194 2168-2208 |
language | eng |
source | IEEE Electronic Library (IEL) Journals |
subjects | Coders; Computer vision; Datasets; Diagnosis, Computer-Assisted; Discrete wavelet transforms; Electric Power Supplies; Encoders-Decoders; Feature extraction; Humans; Image fusion; Image Processing, Computer-Assisted; Image quality; Image reconstruction; Information retrieval; Information Storage and Retrieval; Medical diagnosis; Medical diagnostic imaging; Medical imaging; Modules; self-adaptive weight fusion; self-reconstruction; Signal quality; transformer; Transformers; Transforms |
title | An Improved Hybrid Network With a Transformer Module for Medical Image Fusion |