Infrared and Visible Image Fusion Combining Interesting Region Detection and Nonsubsampled Contourlet Transform

The most fundamental purpose of infrared (IR) and visible (VI) image fusion is to integrate useful information from both sources and produce a new image with higher reliability and understandability for human or computer vision. To better preserve the interesting region and its corresponding detail information, a novel multiscale fusion scheme based on interesting region detection is proposed in this paper. First, the MeanShift algorithm is used to detect the interesting region, containing the salient objects, and the background region of the IR and VI images. The interesting regions are then processed by a guided filter. Next, the nonsubsampled contourlet transform (NSCT) is applied to the background regions of IR and VI to obtain one low-frequency layer and a series of high-frequency layers. An improved per-pixel weighted average method is used to fuse the low-frequency layer, and a pulse-coupled neural network (PCNN) is used to fuse each high-frequency layer. Finally, the fused image is obtained by combining the fused interesting region and the fused background region. Experimental results demonstrate that the proposed algorithm integrates more background detail while highlighting the interesting region with the salient objects, and that it is superior to conventional methods in both objective quality evaluation and visual inspection.
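The per-pixel weighted-average rule for the low-frequency layer can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses local energy (sum of squared values in a small neighborhood) as the per-pixel weight, so the source layer with more local activity contributes more at each pixel. The function name, window size, and the local-energy choice of weight are all assumptions.

```python
import numpy as np

def fuse_low_frequency(lf_ir, lf_vi, window=3, eps=1e-12):
    """Per-pixel weighted average of two low-frequency layers.

    Each source pixel is weighted by its local energy, a simplified
    stand-in for the paper's improved weighted-average rule.
    """
    def local_energy(img):
        # Box-filter the squared image: sum of window*window shifted slices.
        pad = window // 2
        padded = np.pad(img.astype(np.float64) ** 2, pad, mode="edge")
        energy = np.zeros(img.shape, dtype=np.float64)
        for dy in range(window):
            for dx in range(window):
                energy += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return energy

    w_ir = local_energy(lf_ir)
    w_vi = local_energy(lf_vi)
    # Convex combination per pixel: weights are normalized to sum to 1.
    return (w_ir * lf_ir + w_vi * lf_vi) / (w_ir + w_vi + eps)
```

Because the weights are normalized per pixel, the fused value always lies between the two source values, and the source with the larger local energy dominates.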

Bibliographic Details
Published in: Journal of Sensors, 2018-01, Vol. 2018 (2018), p. 1-15
Main Authors: Nie, Rencan, Zhang, Xuejie, Zhou, Dongming, He, Kangjian
Format: Article
Language: English
DOI: 10.1155/2018/5754702
Publisher: Hindawi Publishing Corporation (Cairo, Egypt)
Academic Editor: Calogero M. Oddo
Rights: Copyright © 2018 Kangjian He et al. This is an open access article distributed under the Creative Commons Attribution License.
ISSN: 1687-725X
EISSN: 1687-7268
Subjects: Algorithms; Computer vision; Decomposition; Defense industry; Image detection; Image processing; Infrared imagery; Localization; Methods; Neural networks; Object recognition; Principal components analysis; Salience; Signal processing