DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation
Infrared and visible image fusion aims to produce an informative fused image for the same scene by integrating the complementary information from two source images. Most deep-learning-based fusion networks utilize small kernel-size convolution to extract features from a local receptive field or design unlearnable fusion strategies to fuse features, which limits the feature representation capabilities and fusion performance of the network. Therefore, a novel end-to-end infrared and visible image fusion framework called DTFusion is proposed to address these problems. A residual PConv-ConvNeXt module (RPCM) and dense connections are introduced into the encoder network to efficiently extract features with larger receptive fields. In addition, a texture-contrast compensation module (TCCM) with gradient residuals and an attention mechanism is designed to compensate for the texture details and contrast of features. The fused features are reconstructed through four convolutional layers to generate a fused image with rich scene information. Experiments on public datasets show that DTFusion outperforms other state-of-the-art fusion methods in both subjective vision and objective metrics.
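The abstract only outlines the architecture, so the PyTorch sketch below illustrates that pipeline shape rather than the authors' published implementation: a ConvNeXt-style residual block whose spatial mixing is a partial convolution (standing in for the RPCM), a gradient-residual plus channel-attention module standing in for the TCCM, dense connections in the encoder, and a four-convolution decoder. All module names, channel widths, the 1/4 partial-convolution split, the Sobel gradient, and the concatenation fusion rule are assumptions introduced here for illustration.

```python
# Minimal sketch of the pipeline shape described in the abstract.
# Layer widths, split ratios, and the fusion rule are assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PConvNeXtBlock(nn.Module):
    """ConvNeXt-style residual block whose spatial mixing is a partial
    convolution: only the first dim // 4 channels are convolved, the
    remaining channels pass through untouched (assumed split ratio)."""
    def __init__(self, dim: int, kernel_size: int = 7):
        super().__init__()
        self.n_conv = dim // 4
        self.pconv = nn.Conv2d(self.n_conv, self.n_conv, kernel_size,
                               padding=kernel_size // 2)
        self.norm = nn.BatchNorm2d(dim)
        self.pwconv1 = nn.Conv2d(dim, 4 * dim, 1)   # pointwise expansion
        self.pwconv2 = nn.Conv2d(4 * dim, dim, 1)   # pointwise projection

    def forward(self, x):
        head, tail = torch.split(x, [self.n_conv, x.size(1) - self.n_conv], dim=1)
        y = torch.cat([self.pconv(head), tail], dim=1)
        y = self.pwconv2(F.gelu(self.pwconv1(self.norm(y))))
        return x + y                                 # residual connection


class TextureContrastCompensation(nn.Module):
    """Stand-in for the TCCM: a depthwise Sobel gradient residual for
    texture plus a squeeze-style channel-attention gate for contrast."""
    def __init__(self, dim: int):
        super().__init__()
        sobel = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", sobel.reshape(1, 1, 3, 3).repeat(dim, 1, 1, 1))
        self.register_buffer("ky", sobel.t().reshape(1, 1, 3, 3).repeat(dim, 1, 1, 1))
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(dim, dim, 1),
                                  nn.Sigmoid())

    def forward(self, x):
        gx = F.conv2d(x, self.kx, padding=1, groups=x.size(1))
        gy = F.conv2d(x, self.ky, padding=1, groups=x.size(1))
        grad = gx.abs() + gy.abs()          # gradient (texture) residual
        return x * self.attn(x) + grad      # contrast gating + texture compensation


class DTFusionSketch(nn.Module):
    """Overall shape: shared encoder with dense connections, concatenation
    fusion, texture-contrast compensation, four-convolution decoder."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(1, dim, 3, padding=1)
        self.block1 = PConvNeXtBlock(dim)
        self.block2 = PConvNeXtBlock(2 * dim)        # takes stem + block1 (dense input)
        self.tccm = TextureContrastCompensation(8 * dim)
        self.decoder = nn.Sequential(                # four reconstruction convolutions
            nn.Conv2d(8 * dim, 2 * dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, 1, 3, padding=1), nn.Tanh())

    def encode(self, x):
        f0 = self.stem(x)
        f1 = self.block1(f0)
        f2 = self.block2(torch.cat([f0, f1], dim=1))  # dense connection
        return torch.cat([f0, f1, f2], dim=1)         # 4 * dim channels per modality

    def forward(self, infrared, visible):
        fused = torch.cat([self.encode(infrared), self.encode(visible)], dim=1)
        return self.decoder(self.tccm(fused))


if __name__ == "__main__":
    # Smoke test on random single-channel inputs; output matches input size.
    ir, vis = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
    print(DTFusionSketch()(ir, vis).shape)  # torch.Size([1, 1, 128, 128])
```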
Published in: Sensors (Basel, Switzerland), 2023-12, Vol. 24 (1), p. 203
Main Authors: Zhou, Xinzhi; He, Min; Zhou, Dongming; Xu, Feifei; Jeon, Seunggil
Format: Article
Language: English
Subjects: attention mechanism; Datasets; Deep learning; gradient residuals; image fusion; infrared and visible; larger receptive fields; Sensors
container_issue | 1 |
container_start_page | 203 |
container_title | Sensors (Basel, Switzerland) |
container_volume | 24 |
creator | Zhou, Xinzhi; He, Min; Zhou, Dongming; Xu, Feifei; Jeon, Seunggil
description | Infrared and visible image fusion aims to produce an informative fused image for the same scene by integrating the complementary information from two source images. Most deep-learning-based fusion networks utilize small kernel-size convolution to extract features from a local receptive field or design unlearnable fusion strategies to fuse features, which limits the feature representation capabilities and fusion performance of the network. Therefore, a novel end-to-end infrared and visible image fusion framework called DTFusion is proposed to address these problems. A residual PConv-ConvNeXt module (RPCM) and dense connections are introduced into the encoder network to efficiently extract features with larger receptive fields. In addition, a texture-contrast compensation module (TCCM) with gradient residuals and an attention mechanism is designed to compensate for the texture details and contrast of features. The fused features are reconstructed through four convolutional layers to generate a fused image with rich scene information. Experiments on public datasets show that DTFusion outperforms other state-of-the-art fusion methods in both subjective vision and objective metrics. |
doi_str_mv | 10.3390/s24010203 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1424-8220 |
ispartof | Sensors (Basel, Switzerland), 2023-12, Vol.24 (1), p.203 |
issn | 1424-8220 (ISSN); 1424-8220 (EISSN)
language | eng |
source | Publicly Available Content Database; PubMed Central |
subjects | attention mechanism; Datasets; Deep learning; gradient residuals; image fusion; infrared and visible; larger receptive fields; Sensors
title | DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation |