Training-free prior guided diffusion model for zero-reference low-light image enhancement
Published in: Neurocomputing (Amsterdam), 2025-02, Vol. 617, Article 128974
Format: Article
Language: English
Summary: Images captured under poor illumination not only struggle to provide satisfactory visual information but also adversely affect high-level visual tasks. Therefore, we delve into low-light image enhancement, focusing on two practical challenges: (1) previous methods predominantly require supervised training with paired data and tend to learn mappings specific to the training data, which limits their generalization to unseen images; (2) existing unsupervised methods usually yield sub-optimal image quality due to insufficient utilization of image priors. To address these challenges, we propose a training-free Prior Guided Diffusion model, namely PGDiff, for zero-reference low-light image enhancement. Specifically, to leverage the implicit information within the degraded image, we propose a frequency-guided mechanism that obtains low-frequency features through the bright channel prior, which are combined with the generative prior of a pre-trained diffusion model to recover high-frequency details. To improve the quality of the generated images, we further introduce gradient guidance based on image exposure and color priors. Benefiting from this dual-guided mechanism, PGDiff produces high-quality restoration results without requiring tedious training or paired reference images. Extensive experiments on paired and unpaired datasets show that our training-free method achieves competitive performance against existing learning-based methods, surpassing the state-of-the-art method QuadPrior by 0.25 dB in PSNR on the LOL dataset.
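The bright channel prior mentioned in the summary is the dual of the dark channel prior: for each pixel, take the maximum intensity over all color channels within a local patch; in a well-exposed image this value is close to 1, so low values signal under-exposure. The sketch below is a minimal illustration of that idea, assuming a simple sliding-window maximum and a patch size of 15; it is not the paper's exact formulation.

```python
import numpy as np

def bright_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Bright channel prior: per-pixel maximum over all color channels
    within a local patch (dual of the dark channel prior).

    img: H x W x 3 float array in [0, 1].
    Returns an H x W map; low values indicate under-exposed regions.
    """
    h, w, _ = img.shape
    # per-pixel maximum across color channels
    max_c = img.max(axis=2)
    # pad so every pixel has a full neighborhood, then take the
    # patch-wise maximum with a simple sliding window
    r = patch // 2
    padded = np.pad(max_c, r, mode="edge")
    out = np.empty_like(max_c)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out
```

In practice the inner loop would typically be replaced by a vectorized maximum filter (e.g. `scipy.ndimage.maximum_filter`); the explicit loop is kept here only for clarity.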
Highlights:
• We explore four types of image priors to achieve zero-reference learning.
• We improve the bright channel prior to effectively enhance image brightness.
• A training-free diffusion model guided by priors is meticulously designed.
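The gradient guidance based on exposure and color priors can be illustrated with a small analytic sketch. The loss terms below, a squared deviation of mean luminance from a target exposure level and a gray-world penalty on channel-mean differences, are common choices for such priors, but the specific forms, the target value of 0.6, and the weights are illustrative assumptions rather than the paper's exact losses.

```python
import numpy as np

def exposure_color_gradient(x, target_exposure=0.6, w_exp=1.0, w_col=0.5):
    """Analytic gradient of a simple exposure + gray-world color loss.

    x: H x W x 3 image in [0, 1]. Returns an array of the same shape;
    stepping against this gradient pushes mean luminance toward
    `target_exposure` and balances the three channel means.
    """
    h, w, c = x.shape
    n = h * w
    # exposure term: mean over pixels of (luminance - target)^2
    lum = x.mean(axis=2)                       # H x W
    g_exp = 2.0 * (lum - target_exposure)[..., None] / (n * c)
    # gray-world color term: sum of squared channel-mean differences
    mu = x.mean(axis=(0, 1))                   # shape (3,)
    g_col = np.zeros_like(x)
    for i in range(c):
        g_col[..., i] = 2.0 * sum(mu[i] - mu[j] for j in range(c) if j != i) / n
    return w_exp * g_exp + w_col * g_col

def guided_update(x0_hat, lr=0.5):
    """One gradient-guidance step applied to a denoised estimate."""
    return np.clip(x0_hat - lr * exposure_color_gradient(x0_hat), 0.0, 1.0)
```

In a guided diffusion sampler, such an update would perturb the intermediate denoised estimate at each reverse step, steering generation toward well-exposed, color-balanced results without any training.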
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2024.128974