
Universal Dehazing via Haze Style Transfer

Single image dehazing has been actively studied to overcome the quality degradation of hazy images. Most of the existing methods take model-based approaches, and the existing learning-based methods usually target specific haze styles only, e.g., daytime, varicolored, and nighttime haze. They therefore suffer from limited performance on arbitrary hazy images with diverse characteristics, due to the lack of a universal training dataset. In this paper, we first propose a fully data-driven, learning-based framework for universal dehazing based on haze style transfer (HST). We define multiple domains of haze styles by applying K-means clustering to the background light of diverse real hazy images. We design the haze style modulator to extract the scene radiance features and the haze-related features, respectively. We employ an unpaired image-to-image translation methodology to transfer a source hazy image into different hazy images with diverse styles while preserving the scene radiance. The generated hazy images are then used to train the universal dehazing network in a semi-supervised manner, where dehazing is implemented as a special instance of HST into the no-haze style. Experimental results show that the proposed framework reliably generates realistic and diverse hazy images, and achieves better universal dehazing performance across haze styles than existing state-of-the-art dehazing methods.
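
The haze style domain construction described in the abstract, grouping the estimated background light of real hazy images via K-means, can be illustrated with a minimal sketch. This is not the authors' implementation: the background-light estimates below are stand-in random data, and the choice of K = 3 (e.g., daytime, varicolored, nighttime) is an assumption for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: each hazy image summarized by its estimated
# background (atmospheric) light, an RGB vector in [0, 1].
# Stand-in random data replaces real per-image estimates.
background_lights = np.random.rand(500, 3)

# Group the background lights into K haze style domains.
K = 3  # assumed value; the abstract does not specify K
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0)
style_labels = kmeans.fit_predict(background_lights)

# Each cluster center acts as a representative background light
# for one haze style domain.
for k, center in enumerate(kmeans.cluster_centers_):
    n = int(np.sum(style_labels == k))
    print(f"style {k}: {n} images, mean background light RGB = {center}")
```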

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2024-09, Vol. 34 (9), pp. 8576-8588
Main Authors: Park, Eunpil; Yoo, Jaejun; Sim, Jae-Young
Format: Article
Language: English
Subjects: Attenuation; Clustering; deep learning; DH-HEMTs; Feature extraction; Haze; Image color analysis; Image degradation; Image dehazing; Image quality; Learning; Learning systems; Light sources; Radiance; style transfer; Training; universal dehazing
DOI: 10.1109/TCSVT.2024.3386738
ISSN: 1051-8215
EISSN: 1558-2205
Publisher: New York: IEEE