
R2RNet: Low-light image enhancement via Real-low to Real-normal Network

Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, 2023-02, Vol. 90, p. 103712, Article 103712
Main Authors: Hai, Jiang; Xuan, Zhu; Yang, Ren; Hao, Yutong; Zou, Fengzhu; Lin, Fang; Han, Songchen
Format: Article
Language:English
Description
Summary: Images captured under weak illumination suffer severe quality degradation. Correcting the multiple degradations of low-light images can effectively improve both the visual quality of the images and the performance of high-level vision tasks. In this study, a novel Retinex-based Real-low to Real-normal Network (R2RNet) is proposed for low-light image enhancement. It comprises three subnets: a Decom-Net, a Denoise-Net, and a Relight-Net, which handle decomposition, denoising, and contrast enhancement with detail preservation, respectively. R2RNet uses the spatial information of the image to improve contrast and the frequency information to preserve details, so the model achieves more robust results across degraded images. Unlike most previous methods, which were trained on synthetic images, we collected the first Large-Scale Real-World paired low/normal-light image dataset (LSRW dataset) to satisfy the training requirements and give the model better generalization in real-world scenes. Extensive experiments on publicly available datasets demonstrate that our method outperforms existing state-of-the-art methods both quantitatively and visually. In addition, the performance of a high-level vision task (face detection) in low-light conditions can be effectively improved by using the enhanced results obtained with our method. Our code and the LSRW dataset are available at: https://github.com/JianghaiSCU/R2RNet.
•A DN-ResUnet is proposed that can stack more layers with fewer parameters.
•A Relight-Net combining spatial and frequency information is proposed.
•A frequency loss is proposed to recover more details in the illumination map.
•The LSRW dataset is introduced to satisfy the training requirements.
•The experimental results demonstrate the superiority of our method.
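The Retinex assumption underlying the Decom-Net step is that an observed image factors into reflectance and illumination, I = R ⊙ L. A minimal sketch of this decomposition, using a per-pixel channel-maximum heuristic for the illumination rather than the paper's learned Decom-Net (the function name and heuristic are illustrative assumptions, not the authors' code):

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Split an RGB image (H, W, 3) in [0, 1] into reflectance and illumination.

    Under the Retinex model I = R * L, the illumination L is estimated here
    as the per-pixel maximum over color channels (a common heuristic), and
    the reflectance is recovered as R = I / L.
    """
    illumination = image.max(axis=2, keepdims=True)       # (H, W, 1)
    reflectance = image / np.maximum(illumination, eps)   # (H, W, 3), in [0, 1]
    return reflectance, illumination

# Toy low-light image: the product R * L reconstructs the input exactly.
img = np.random.rand(4, 4, 3) * 0.2 + 0.05
R, L = retinex_decompose(img)
assert np.allclose(R * L, img, atol=1e-6)
```

In a Retinex-based enhancement pipeline such as the one the abstract describes, the illumination map L would then be brightened (here, by the Relight-Net) and the result recombined with the denoised reflectance.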
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2022.103712