
Training-free prior guided diffusion model for zero-reference low-light image enhancement

Images captured under poor illumination not only struggle to provide satisfactory visual information but also adversely affect high-level visual tasks. Therefore, we delve into low-light image enhancement. We mainly focus on two practical challenges: (1) previous methods predominantly require supervised training with paired data, tending to learn mappings specific to the training data, which limits their generalization ability on unseen images; (2) existing unsupervised methods usually yield sub-optimal image quality due to insufficient utilization of image priors. To address these challenges, we propose a training-free Prior Guided Diffusion model, namely PGDiff, for zero-reference low-light image enhancement. Specifically, to leverage the implicit information within the degraded image, we propose a frequency-guided mechanism that obtains low-frequency features through the bright channel prior; these are combined with the generative prior of the pre-trained diffusion model to recover high-frequency details. To improve the quality of generated images, we further introduce gradient guidance based on image exposure and color priors. Benefiting from this dual-guided mechanism, PGDiff can produce high-quality restoration results without requiring tedious training or paired reference images. Extensive experiments on paired and unpaired datasets show that our training-free method achieves competitive performance against existing learning-based methods, surpassing the state-of-the-art method QuadPrior by 0.25 dB in PSNR on the LOL dataset.

Highlights
• We explore four types of image priors to achieve zero-reference learning.
• We improve the bright channel prior to effectively enhance image brightness.
• A training-free diffusion model guided by priors is meticulously designed.
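
The bright channel prior mentioned in the abstract is commonly defined as the patch-wise maximum over color channels, i.e. the dual of the dark channel prior. As a rough illustration of that idea, the sketch below computes such a map with NumPy/SciPy; the function name, patch size, and exact formulation are assumptions for illustration, since this record does not include the paper's implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(image: np.ndarray, patch_size: int = 15) -> np.ndarray:
    """Compute a bright-channel map for an HxWx3 image in [0, 1].

    Hypothetical helper: patch-wise maximum over color channels,
    the dual of the dark channel prior.
    """
    channel_max = image.max(axis=2)  # per-pixel max over R, G, B
    # Local maximum over a patch_size x patch_size neighborhood.
    return maximum_filter(channel_max, size=patch_size)
```

In low-light enhancement, a map like this is often treated as a rough illumination estimate: dividing the input by it (suitably regularized) brightens dark regions while leaving well-exposed ones largely unchanged.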

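The frequency-guided mechanism combines low-frequency content recovered via the bright channel prior with high-frequency detail from the diffusion model's generative prior. One plausible reading of that idea, sketched under the assumption of a simple Gaussian low-pass decomposition (the paper's actual frequency operator is not given in this record):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_blend(prior_image: np.ndarray,
                    diffusion_image: np.ndarray,
                    sigma: float = 3.0) -> np.ndarray:
    """Keep the low frequencies of the prior-enhanced image and the
    high frequencies of the diffusion estimate (both HxWx3 in [0, 1])."""
    # Blur only the two spatial axes, not the channel axis.
    blur = lambda x: gaussian_filter(x, sigma=(sigma, sigma, 0))
    low = blur(prior_image)                         # low-frequency structure
    high = diffusion_image - blur(diffusion_image)  # residual detail
    return np.clip(low + high, 0.0, 1.0)
```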

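The gradient guidance described in the abstract steers sampling with exposure and color priors. The sketch below shows the general pattern of such guidance inside a diffusion sampling loop, in the style of classifier guidance; the specific losses, the `predict_x0` callable, and the guidance scale are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def prior_loss(x0: torch.Tensor, target_exposure: float = 0.6) -> torch.Tensor:
    """Exposure + color losses on a predicted clean image x0 of shape (B, 3, H, W).

    Hypothetical losses: the exposure term pulls mean brightness toward a
    target level; the color term is a gray-world-style penalty asking the
    channel means to agree.
    """
    exposure = (x0.mean(dim=(1, 2, 3)) - target_exposure).pow(2).mean()
    channel_means = x0.mean(dim=(2, 3))  # (B, 3)
    color = (channel_means - channel_means.mean(dim=1, keepdim=True)).pow(2).mean()
    return exposure + color

def guided_update(x_t: torch.Tensor, predict_x0, scale: float = 100.0) -> torch.Tensor:
    """One guidance nudge within a sampling step: move x_t down the
    gradient of the prior loss evaluated on the predicted clean image."""
    x_t = x_t.detach().requires_grad_(True)
    loss = prior_loss(predict_x0(x_t))  # predict_x0: assumed differentiable callable
    grad, = torch.autograd.grad(loss, x_t)
    return (x_t - scale * grad).detach()
```

Because guidance of this kind only alters the sampling trajectory of a pre-trained diffusion model, no weights are updated, which is what makes such an approach training-free.
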
Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2025-02, Vol. 617, p. 128974, Article 128974
Main Authors: Shang, Kai; Shao, Mingwen; Wang, Chao; Qiao, Yuanjian; Wan, Yecong
Format: Article
Language: English
Subjects: Diffusion model; Low-light image enhancement; Training-free; Zero-reference
DOI: 10.1016/j.neucom.2024.128974
ISSN: 0925-2312
Publisher: Elsevier B.V.
Online Access: https://doi.org/10.1016/j.neucom.2024.128974