A Progressive Single-Image Dehazing Network With Feedback Mechanism

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, p. 158091-158097
Main Authors: Liang, Tisong, Li, Zhiwei, Ren, Yuanhong, Mao, Qi, Zhou, Wuneng
Format: Article
Language:English
Description
Summary: In the past decade, deep learning methods, especially convolutional neural networks, have received much attention in single-image dehazing. However, the haze in a hazy image cannot be cleanly separated out, because it is intricately mixed with the background components, and removing it crudely can also destroy the background tone set by the global atmospheric light. To resolve this problem and reconstruct clearer, higher-quality dehazed images, we introduce a progressive feedback network (PFBN), a recurrent structure equipped with a feedback mechanism. The mechanism is implemented by stacking feedback blocks linked by feedback connections across iterations: at the input of each feedback block, the block's hidden state from the previous iteration is delivered through a feedback connection and forms part of the current input. This last hidden state, regarded as high-level information, is fused with the low-level information output by the preceding block to generate an effective feature representation. Moreover, we propose an enhancement self-ensemble strategy that decreases the network's random error and yields clearer dehazed images. Finally, we design a series of extensive experiments to verify the strong performance of our method.
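The abstract describes the feedback mechanism only at a high level. As a rough illustration of the idea, the sketch below (plain PyTorch) shows one way a feedback block might fuse the low-level features from the preceding block with its own hidden state from the previous iteration, plus a simple flip-based self-ensemble that averages predictions to reduce random error. All names and design choices here (FeedbackBlock, num_features, fusion by concatenation, the specific flip transforms) are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of a feedback block and a self-ensemble step.
# Layer sizes and the concatenation-based fusion are guesses made for
# illustration; they are not taken from the PFBN paper.
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    """Fuses low-level features from the preceding block with this block's
    hidden state from the previous iteration (the feedback connection)."""
    def __init__(self, num_features: int = 64):
        super().__init__()
        # Fuse low-level input and fed-back high-level state by concatenation.
        self.fuse = nn.Conv2d(2 * num_features, num_features, kernel_size=1)
        self.body = nn.Sequential(
            nn.Conv2d(num_features, num_features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_features, num_features, kernel_size=3, padding=1),
        )

    def forward(self, low_level: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        x = self.fuse(torch.cat([low_level, hidden], dim=1))
        return self.body(x)  # becomes the hidden state for the next iteration

def run_iterations(block: FeedbackBlock, feats: torch.Tensor, steps: int = 4):
    """Unrolls the recurrent structure: each iteration receives the previous
    iteration's hidden state through the feedback connection."""
    hidden = torch.zeros_like(feats)  # no feedback exists at the first iteration
    outputs = []
    for _ in range(steps):
        hidden = block(feats, hidden)
        outputs.append(hidden)
    return outputs

def self_ensemble(model, img: torch.Tensor) -> torch.Tensor:
    """One common reading of a self-ensemble strategy: average predictions
    over geometrically transformed inputs to reduce random error."""
    transforms = [lambda t: t, lambda t: t.flip(-1), lambda t: t.flip(-2)]
    # Each flip is its own inverse, so applying it again undoes the transform.
    preds = [tf(model(tf(img))) for tf in transforms]
    return torch.stack(preds).mean(dim=0)

In this reading, unrolling the block for several iterations gives the recurrent structure the abstract describes; the first iteration uses a zero hidden state because no feedback is available yet.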
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3130468