
Feature attention gated context aggregation network for single image dehazing and its application on unmanned aerial vehicle images

Single-image dehazing is a highly challenging ill-posed task in computer vision. To address it, a new image dehazing model with feature attention, the feature attention gated context aggregation network (FAGCA-Net), is proposed to tackle the incomplete or over-dehazing caused by the original model's inability to handle non-uniform haze density distributions. A feature attention module that combines channel attention and spatial attention is introduced. In addition, the authors propose a new dilated attention convolution block, which not only suppresses the grid artefacts caused by dilated convolution but also offers extra flexibility in handling different types of feature information. Beyond the input image itself, the dark channel and edge channel of the image are incorporated into the model's final input, which aids the learning process. To demonstrate the robustness of the new model, it is applied to two completely different dehazing datasets, where it achieves a significant dehazing performance improvement over the original model. Finally, to verify its effectiveness in practical production settings, the authors apply it as an image preprocessing step to a set of unmanned aerial vehicle (UAV) images of foreign objects; the results show that UAV images dehazed by FAGCA-Net are better suited to subsequent use.
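The channel-plus-spatial feature attention described in the abstract is a common building block in recent dehazing networks. The PyTorch sketch below shows one minimal way such a module can be wired; the reduction ratio and spatial kernel size are illustrative assumptions, not FAGCA-Net's exact configuration, for which the paper itself should be consulted.

```python
# Minimal sketch of a feature attention module combining channel attention
# and spatial attention. Layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze global spatial context into per-channel weights."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))             # reweight each channel


class SpatialAttention(nn.Module):
    """Produce a per-pixel weight map from pooled channel statistics."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)        # B x 1 x H x W
        max_map, _ = x.max(dim=1, keepdim=True)      # B x 1 x H x W
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                              # reweight each pixel


class FeatureAttention(nn.Module):
    """Channel attention followed by spatial attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)             # dummy feature map
    print(FeatureAttention(64)(feats).shape)         # torch.Size([1, 64, 128, 128])
```

Applying channel attention before spatial attention lets the module first decide which feature maps matter and then where in the image they matter, which is the usual intuition behind handling non-uniform haze density.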

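The abstract also notes that the dark channel and edge channel of the hazy image are fed to the network alongside the RGB input. The sketch below shows one plausible way to build such a five-channel input in PyTorch; the 15x15 dark-channel patch and the Sobel-based edge map are common choices assumed here, not details taken from the paper.

```python
# Minimal sketch of an extended network input: RGB image concatenated with
# its dark channel and an edge channel. Patch size and edge operator are
# assumptions for illustration.
import torch
import torch.nn.functional as F


def dark_channel(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Per-pixel minimum over RGB, then a local minimum filter (B x 1 x H x W)."""
    per_pixel_min = img.min(dim=1, keepdim=True).values
    # A minimum filter is a negated max pool of the negated input.
    return -F.max_pool2d(-per_pixel_min, kernel_size=patch,
                         stride=1, padding=patch // 2)


def edge_channel(img: torch.Tensor) -> torch.Tensor:
    """Sobel gradient magnitude of the grayscale image (B x 1 x H x W)."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)


def build_model_input(img: torch.Tensor) -> torch.Tensor:
    """Stack RGB + dark channel + edge channel into a 5-channel tensor."""
    return torch.cat([img, dark_channel(img), edge_channel(img)], dim=1)


if __name__ == "__main__":
    hazy = torch.rand(1, 3, 256, 256)        # dummy hazy image in [0, 1]
    print(build_model_input(hazy).shape)     # torch.Size([1, 5, 256, 256])
```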
Bibliographic Details
Published in: IET cyber-physical systems, 2024-09, Vol. 9 (3), p. 218-227
Main Authors: Wu, Yongquan; Zhao, Xuan; Zhang, Xinsheng; Long, Tao; Luo, Ping
Format: Article
Language: English
Subjects: computer vision; learning (artificial intelligence)
ISSN: 2398-3396
DOI: 10.1049/cps2.12076
Publisher: Wiley