Divide and conquer: Ill-light image enhancement via hybrid deep network
Published in: Expert Systems with Applications, 2021-11, Vol. 182, p. 115034, Article 115034
Main Authors: , , ,
Format: Article
Language: English
Summary:
• Low and ill-light image enhancement.
• Low-light image enhancement without paired training-data supervision.
• Image enhancement with a few shots of training data.
• Deep hybrid learning, independent of the type of training and test data.
• First large-scale dataset for ill-lighting conditions.
Intelligent system applications in computer vision suffer from detection and identification problems under ill-lighting conditions (i.e., non-uniform illumination), where under-exposed and over-exposed regions coexist in the captured images. Processing these images results in over- and under-enhancement with colour and contrast distortions. Traditional methods design handcrafted constraints and rely on image pairs and priors, whereas existing deep learning-based methods rely on large-scale, and often paired, training data. However, the capacity of these methods is limited to specific scenes (i.e., lighting conditions). In this paper, we present a deep-hybrid ill-light image enhancement method and propose a contrast enhancement strategy based on the decomposition of the input images into reflection J and illumination T. A Divide to Glitter network (D2G-Net) is designed to learn from a few shots of training samples and does not require paired or large-quantity training data. D2G-Net comprises a multilayer Division-Net for image division and a Glitter-Net to amplify the illumination map. We propose to regularize learning using a correlation consistency of the decomposition extracted from the input data itself. Extensive experiments are conducted under ill-lighting conditions, and a new test dataset with strong lighting variation is proposed to evaluate the performance of the proposed method. Experimental results show that our method is superior at preserving structural and texture details compared to state-of-the-art approaches, which suggests that our method is more practical in interactive computer vision and intelligent expert system applications.
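The decomposition the abstract describes follows the Retinex model, where an observed image I is the pixel-wise product of reflectance J and illumination T, and enhancement amplifies T before recomposition. The following is a minimal illustrative sketch of that idea, not the paper's learned D2G-Net: the max-channel illumination estimate and the gamma amplification are assumed, hand-picked choices standing in for Division-Net and Glitter-Net.

```python
import numpy as np

def enhance_ill_light(image, gamma=0.6, eps=1e-6):
    """Retinex-style enhancement sketch.

    Decomposes an RGB uint8 image as I = J * T, amplifies the
    illumination T, and recomposes. gamma < 1 brightens dark
    regions more than bright ones, which loosely mirrors the
    goal of handling non-uniform illumination.
    """
    img = image.astype(np.float64) / 255.0
    # Crude illumination estimate: per-pixel maximum over colour channels.
    T = img.max(axis=2, keepdims=True)
    # Reflectance from I = J * T  =>  J = I / T (eps avoids division by zero).
    J = img / (T + eps)
    # Amplify the illumination map with a gamma curve.
    T_enhanced = np.power(T, gamma)
    out = np.clip(J * T_enhanced, 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

# A uniformly under-exposed patch gets brightened while keeping its shape.
dark = np.full((4, 4, 3), 40, dtype=np.uint8)
bright = enhance_ill_light(dark)
```

Because gamma is applied only to T, colour ratios carried by the reflectance J are left largely intact, which is why decomposition-based methods tend to distort colour less than direct histogram stretching.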
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2021.115034