Recurrent context-aware multi-stage network for single image deraining
Published in: Computer Vision and Image Understanding, 2023-01, Vol. 227, Article 103612, p. 103612
Main Authors: , , , , , ,
Format: Article
Language: English
Summary: Single image rain streak removal is essential because rain streaks can severely degrade many computer vision systems. In this paper, we propose a novel recurrent context-aware multi-stage network (ReCMN) for image rain removal that gradually predicts clean derained results. Specifically, ReCMN adopts a multi-stage strategy to model contextual relationships. First, a densely residual extraction block (DREB) guides feature extraction. Then, a multi-scale context aggregation block (MCAB) exploits long-distance dependencies and multi-scale features, fusing features at different levels to fully capture contextual information. Finally, a parallel attention block (PAB) captures channel and spatial information and passes on only effective feature representations. Experimental results demonstrate that our method outperforms several state-of-the-art methods on both synthetic datasets and real-world rainy images.
•Propose ReCMN, a recurrent multi-stage deraining network to generate clean images.
•Introduce MCAB to fuse features and capture contextual information.
•Apply PAB to obtain informative features from the channel and spatial dimensions.
•Show state-of-the-art performance on both real-world and synthetic datasets.
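The parallel attention block (PAB) is described here only at a high level. As an illustration of the general idea, a minimal NumPy sketch of parallel channel and spatial attention is shown below; the function name, the average-pooling gates, and the additive fusion are assumptions for clarity, not the paper's implementation, which would use learned convolutional layers and be trained end to end.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_attention(x):
    """Toy parallel attention on a feature map x of shape (C, H, W):
    a channel branch and a spatial branch are computed independently
    on the same input, then fused by addition (an assumed fusion)."""
    # Channel branch: global average pooling -> one gate per channel.
    channel_gate = sigmoid(x.mean(axis=(1, 2)))       # shape (C,)
    channel_out = x * channel_gate[:, None, None]
    # Spatial branch: channel-wise mean -> one gate per pixel.
    spatial_gate = sigmoid(x.mean(axis=0))            # shape (H, W)
    spatial_out = x * spatial_gate[None, :, :]
    # Parallel fusion: sum the two re-weighted branches.
    return channel_out + spatial_out

x = np.random.randn(8, 16, 16)   # toy feature map: 8 channels, 16x16
y = parallel_attention(x)        # same shape as the input
```

In a real network the two gates would come from small learned sub-networks rather than fixed pooling, but the sketch shows the structural point: the channel and spatial branches run in parallel on the same features instead of sequentially.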
ISSN: 1077-3142 (print), 1090-235X (electronic)
DOI: 10.1016/j.cviu.2022.103612