Deep multi-scale network for single image dehazing with self-guided maps
A self-guided map, which is obtained from an input hazy image, provides useful guidance for haze removal. Existing end-to-end multi-scale networks tend to recover under-dehazed results due to the lack of a self-guided map. To solve this problem, we propose a deep multi-scale network with self-guided maps for image dehazing, which consists of a pre-processor module and a deep multi-scale network (DMSN). The pre-processor module consists of a pre-dehazer based on the dark channel prior and a pre-dehazer based on gamma correction, which together generate effective self-guided maps. We concatenate the self-guided maps and the hazy image as the DMSN input. Based on an encoder-decoder structure, the DMSN improves feature representation with a new feature extraction block at each scale. The proposed method is evaluated in detail with both qualitative and quantitative analyses. The experimental results show that the proposed algorithm performs favorably against state-of-the-art methods on widely used dehazing benchmark datasets as well as on real-world hazy images.
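The record's abstract describes a pre-processor that derives two self-guided maps from the hazy input, one from a dark-channel-prior pre-dehazer and one from a gamma-correction pre-dehazer, and concatenates them with the hazy image to form the network input. A rough NumPy sketch of that idea is below; the function names, patch size, gamma value, and the 0.1% atmospheric-light heuristic are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dark_channel_prior_dehaze(hazy, patch=15, omega=0.95, t0=0.1):
    """Rough pre-dehazer via the dark channel prior.
    `hazy` is an H x W x 3 float image with values in [0, 1]."""
    # Dark channel: per-pixel channel minimum, then a local minimum filter.
    dark = hazy.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    h, w = dark.shape
    dc = np.empty_like(dark)
    for i in range(h):
        for j in range(w):
            dc[i, j] = padded[i:i + patch, j:j + patch].min()
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels
    # (a common simplification of the usual brightest-pixel selection).
    n = max(1, int(0.001 * h * w))
    idx = np.argsort(dc, axis=None)[-n:]
    A = hazy.reshape(-1, 3)[idx].mean(axis=0)
    A = np.maximum(A, 1e-6)  # guard against division by zero
    # Transmission estimate and scene-radiance recovery, clipped to [0, 1].
    t = 1.0 - omega * (hazy / A).min(axis=2)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((hazy - A) / t + A, 0.0, 1.0)

def gamma_correct(hazy, gamma=1.5):
    """Gamma-correction pre-dehazer: darkens washed-out hazy regions."""
    return np.clip(hazy, 0.0, 1.0) ** gamma

def build_network_input(hazy):
    """Concatenate the hazy image with the two self-guided maps along the
    channel axis, as the abstract describes (3 + 3 + 3 = 9 channels)."""
    return np.concatenate(
        [hazy, dark_channel_prior_dehaze(hazy), gamma_correct(hazy)], axis=2)
```

Given an H×W×3 image in [0, 1], `build_network_input` returns an H×W×9 array whose extra channels play the role of the self-guided maps fed to the dehazing network.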
Published in: | Signal, image and video processing, 2023-09, Vol.17 (6), p.2867-2875 |
Main Authors: | Liu, Jianlei; Yu, Hao; Zhang, Zhongzheng; Chen, Chen; Hou, Qianwen |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Coders; Computer Imaging; Computer Science; Encoders-Decoders; Feature extraction; Image Processing and Computer Vision; Microprocessors; Modules; Multimedia Information Systems; Original Paper; Pattern Recognition and Graphics; Qualitative analysis; Signal, Image and Speech Processing; Vision |
cites | cdi_FETCH-LOGICAL-c270t-4746b68644645011397bf69f7f2f33c9fb91582850939ae52bec8c6b1b8879183 |
container_end_page | 2875 |
container_issue | 6 |
container_start_page | 2867 |
container_title | Signal, image and video processing |
container_volume | 17 |
creator | Liu, Jianlei; Yu, Hao; Zhang, Zhongzheng; Chen, Chen; Hou, Qianwen |
description | A self-guided map, which is obtained from an input hazy image, provides useful guidance for haze removal. Existing end-to-end multi-scale networks tend to recover under-dehazed results due to the lack of a self-guided map. To solve this problem, we propose a deep multi-scale network with self-guided maps for image dehazing, which consists of a pre-processor module and a deep multi-scale network (DMSN). The pre-processor module consists of a pre-dehazer based on the dark channel prior and a pre-dehazer based on gamma correction, which together generate effective self-guided maps. We concatenate the self-guided maps and the hazy image as the DMSN input. Based on an encoder-decoder structure, the DMSN improves feature representation with a new feature extraction block at each scale. The proposed method is evaluated in detail with both qualitative and quantitative analyses. The experimental results show that the proposed algorithm performs favorably against state-of-the-art methods on widely used dehazing benchmark datasets as well as on real-world hazy images. |
doi_str_mv | 10.1007/s11760-023-02505-2 |
format | article |
publisher | London: Springer London |
fulltext | fulltext |
identifier | ISSN: 1863-1703 |
ispartof | Signal, image and video processing, 2023-09, Vol.17 (6), p.2867-2875 |
issn | 1863-1703 1863-1711 |
language | eng |
recordid | cdi_proquest_journals_2826803548 |
source | Springer Link |
subjects | Algorithms; Coders; Computer Imaging; Computer Science; Encoders-Decoders; Feature extraction; Image Processing and Computer Vision; Microprocessors; Modules; Multimedia Information Systems; Original Paper; Pattern Recognition and Graphics; Qualitative analysis; Signal, Image and Speech Processing; Vision |
title | Deep multi-scale network for single image dehazing with self-guided maps |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T16%3A27%3A13IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Deep%20multi-scale%20network%20for%20single%20image%20dehazing%20with%20self-guided%20maps&rft.jtitle=Signal,%20image%20and%20video%20processing&rft.au=Liu,%20Jianlei&rft.date=2023-09-01&rft.volume=17&rft.issue=6&rft.spage=2867&rft.epage=2875&rft.pages=2867-2875&rft.issn=1863-1703&rft.eissn=1863-1711&rft_id=info:doi/10.1007/s11760-023-02505-2&rft_dat=%3Cproquest_cross%3E2826803548%3C/proquest_cross%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c270t-4746b68644645011397bf69f7f2f33c9fb91582850939ae52bec8c6b1b8879183%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2826803548&rft_id=info:pmid/&rfr_iscdi=true |