Infrared–Visible Image Fusion through Feature-Based Decomposition and Domain Normalization
Infrared–visible image fusion is valuable across various applications due to the complementary information that it provides. However, the current fusion methods face challenges in achieving high-quality fused images. This paper identifies a limitation in the existing fusion framework that affects the fusion quality: modal differences between infrared and visible images are often overlooked, resulting in the poor fusion of the two modalities. This limitation implies that features from different sources may not be consistently fused, which can impact the quality of the fusion results. Therefore, we propose a framework that utilizes feature-based decomposition and domain normalization. This decomposition method separates infrared and visible images into common and unique regions. To reduce modal differences while retaining unique information from the source images, we apply domain normalization to the common regions within the unified feature space. This space can transform infrared features into a pseudo-visible domain, ensuring that all features are fused within the same domain and minimizing the impact of modal differences during the fusion process. Noise in the source images adversely affects the fused images, compromising the overall fusion performance. Thus, we propose the non-local Gaussian filter. This filter can learn the shape and parameters of its filtering kernel based on the image features, effectively removing noise while preserving details. Additionally, we propose a novel dense attention in the feature extraction module, enabling the network to understand and leverage inter-layer information. Our experiments demonstrate a marked improvement in fusion quality with our proposed method.
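To make the domain-normalization idea concrete, here is a minimal sketch in the style of adaptive instance normalization (the record's subject terms list "dynamic instance normalization"): infrared features are whitened and then re-colored with the visible features' channel-wise statistics, so both modalities end up in one pseudo-visible domain before fusion. The class name `PseudoVisibleNorm` and the exact formulation are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PseudoVisibleNorm(nn.Module):
    """Hypothetical AdaIN-style normalization: whiten infrared features,
    then re-color them with the visible features' channel-wise statistics,
    so both modalities share one (pseudo-visible) domain before fusion."""

    def __init__(self, eps: float = 1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, ir_feat: torch.Tensor, vis_feat: torch.Tensor) -> torch.Tensor:
        # Channel-wise mean/std over the spatial dimensions of (B, C, H, W).
        ir_mu = ir_feat.mean(dim=(2, 3), keepdim=True)
        ir_std = ir_feat.std(dim=(2, 3), keepdim=True) + self.eps
        vis_mu = vis_feat.mean(dim=(2, 3), keepdim=True)
        vis_std = vis_feat.std(dim=(2, 3), keepdim=True) + self.eps
        # Whitened infrared features re-expressed in visible-domain statistics.
        return (ir_feat - ir_mu) / ir_std * vis_std + vis_mu
```

With `ir_feat` and `vis_feat` of shape (B, C, H, W), `PseudoVisibleNorm()(ir_feat, vis_feat)` returns infrared features expressed in visible-domain statistics, ready to be fused with `vis_feat` in the same feature space.

The noise-removal idea can be sketched the same way: a Gaussian kernel built entirely from differentiable torch ops, so its parameter can be learned from data. The paper's non-local Gaussian filter additionally learns the kernel's shape from image features; this sketch shows only the learnable-sigma core, and all function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma: torch.Tensor, ksize: int = 5) -> torch.Tensor:
    # 1-D Gaussian built from a scalar sigma tensor; staying in torch ops
    # keeps the kernel differentiable, so sigma can be trained by SGD.
    ax = torch.arange(ksize, dtype=sigma.dtype, device=sigma.device) - (ksize - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)   # separable Gaussian -> 2-D kernel
    return k2d / k2d.sum()    # normalize so overall brightness is preserved

def adaptive_gaussian_blur(x: torch.Tensor, sigma: torch.Tensor, ksize: int = 5) -> torch.Tensor:
    # Depthwise convolution: the same kernel filters every channel of (B, C, H, W).
    c = x.shape[1]
    k = gaussian_kernel(sigma, ksize).view(1, 1, ksize, ksize).repeat(c, 1, 1, 1)
    return F.conv2d(x, k, padding=ksize // 2, groups=c)

# sigma as a single learnable parameter; predicting it per image from
# features (e.g. with a small CNN head) would be the obvious extension.
sigma = torch.nn.Parameter(torch.tensor(1.0))
noisy = torch.randn(1, 3, 64, 64)
denoised = adaptive_gaussian_blur(noisy, sigma)
```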
Published in: Remote sensing (Basel, Switzerland), 2024-03, Vol. 16 (6), p. 969
Main Authors: Chen, Weiyi; Miao, Lingjuan; Wang, Yuhao; Zhou, Zhiqiang; Qiao, Yajun
Format: Article
Language: English
Publisher: MDPI AG, Basel
License: CC BY 4.0 (open access)
DOI: 10.3390/rs16060969
ISSN: 2072-4292
Source: Publicly Available Content (ProQuest)
Subjects: Algorithms; Computer vision; Decomposition; Deep learning; dense attention; dynamic instance normalization; Feature decomposition; Feature extraction; Image quality; infrared and visible image fusion; Infrared imagery; Methods; non-local Gaussian filter; R&D; Radiation; Research & development; unified feature space