A Novel Transformer-Based Attention Network for Image Dehazing
Published in: | Sensors (Basel, Switzerland), 2022-04, Vol.22 (9), p.3428 |
---|---|
Main Authors: | Gao, Guanlei; Cao, Jie; Bao, Chun; Hao, Qun; Ma, Aoqi; Li, Gang |
Format: | Article |
Language: | English |
Subjects: | Ablation; Computational linguistics; convolutional neural network; image dehazing; Image quality; Language processing; Light; Methods; Natural language interfaces; Neural networks; Parameter estimation; Remote sensing; Transformer |
container_issue | 9 |
---|---|
container_start_page | 3428 |
container_title | Sensors (Basel, Switzerland) |
container_volume | 22 |
creator | Gao, Guanlei; Cao, Jie; Bao, Chun; Hao, Qun; Ma, Aoqi; Li, Gang |
description | Image dehazing is challenging due to the problem of ill-posed parameter estimation. Numerous prior-based and learning-based methods have achieved great success. However, most learning-based methods rely on the changes and connections between scale and depth in convolutional neural networks for feature extraction. Although their performance is greatly improved compared with prior-based methods, their ability to extract detailed information remains inferior. In this paper, we propose an image dehazing model built with a convolutional neural network and a Transformer, called Transformer for image dehazing (TID). First, we propose a Transformer-based channel attention module (TCAM), using a spatial attention module as its supplement. These two modules form an attention module that enhances both channel and spatial features. Second, we use a multiscale parallel residual network as the backbone, which can extract feature information at different scales to achieve feature fusion. We experimented on the RESIDE dataset, and then conducted extensive comparisons and ablation studies against state-of-the-art methods. Experimental results show that our proposed method effectively improves the quality of the restored image and also outperforms the existing attention modules. (An illustrative sketch of the channel/spatial attention idea appears below, after the record fields.) |
doi_str_mv | 10.3390/s22093428 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1424-8220 |
ispartof | Sensors (Basel, Switzerland), 2022-04, Vol.22 (9), p.3428 |
issn | 1424-8220 1424-8220 |
language | eng |
recordid | cdi_doaj_primary_oai_doaj_org_article_ad51a6fa3cc14665b6cf53fbe362729e |
source | Publicly Available Content (ProQuest); PubMed Central |
subjects | Ablation; Computational linguistics; convolutional neural network; image dehazing; Image quality; Language processing; Light; Methods; Natural language interfaces; Neural networks; Parameter estimation; Remote sensing; Transformer |
title | A Novel Transformer-Based Attention Network for Image Dehazing |
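The abstract describes a Transformer-based channel attention module (TCAM) supplemented by a spatial attention module. As a rough, non-authoritative sketch of that general idea, the PyTorch-style code below combines a channel-attention block, in which channels are treated as tokens attended over with multi-head attention, with a CBAM-style spatial-attention block. All class names (`TransformerChannelAttention`, `SpatialAttention`, `DualAttention`), the pooled token size, the head count, and the residual wiring are illustrative assumptions and are not taken from the paper's actual TCAM implementation.

```python
# Hypothetical sketch of a combined channel/spatial attention block, loosely
# following the idea described in the abstract. Names and hyperparameters are
# assumptions for illustration, not the authors' published code.
import torch
import torch.nn as nn


class TransformerChannelAttention(nn.Module):
    """Channel attention in which every channel is treated as a token.
    A channel's token embedding is a coarse pooled map (k x k values), so the
    multi-head attention models pairwise channel relationships before
    producing per-channel gating weights."""

    def __init__(self, pooled_size: int = 8, num_heads: int = 4):
        super().__init__()
        dim = pooled_size * pooled_size          # embedding dim per channel token
        self.pool = nn.AdaptiveAvgPool2d(pooled_size)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads,
                                          batch_first=True)
        self.to_weight = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        tokens = self.pool(x).flatten(2)                   # (B, C, k*k)
        tokens = self.norm(tokens)
        attended, _ = self.attn(tokens, tokens, tokens)    # channel-to-channel attention
        weights = torch.sigmoid(self.to_weight(attended))  # (B, C, 1), one gate per channel
        return x * weights.view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention, used here as a complement to the channel branch."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)              # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)              # (B, 1, H, W)
        weights = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * weights


class DualAttention(nn.Module):
    """Channel attention followed by spatial attention, wrapped in a residual skip."""

    def __init__(self):
        super().__init__()
        self.channel_attn = TransformerChannelAttention()
        self.spatial_attn = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.spatial_attn(self.channel_attn(x))
        return x + out                                     # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 64, 128, 128)                   # a batch of feature maps
    print(DualAttention()(feats).shape)                    # torch.Size([2, 64, 128, 128])
```

Because both branches only rescale the input with sigmoid gates and the block preserves the feature-map shape, a module of this kind can be dropped between stages of a multiscale residual backbone such as the one the abstract describes, without changing the surrounding layer dimensions.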