Learning a Coordinated Network for Detail-Refinement Multiexposure Image Fusion
Published in: IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 2, pp. 713-727, Feb. 2023
Main Authors:
Format: Article
Language: English
Summary: Deep learning has made rapid progress in the field of multi-exposure image fusion. However, extracting useful features while retaining texture details and color remains challenging. To address this issue, in this paper we propose a coordinated learning network for detail refinement in an end-to-end manner. First, we obtain shallow feature maps from extremely over- and under-exposed source images with a collaborative extraction module. Second, smooth attention weight maps are generated under the guidance of a self-attention module, which draws global connections to correlate patches at different locations. With these two modules working in concert, the proposed network obtains a coarse fused image. Moreover, with the assistance of an edge revision module, the edge details of the fused results are refined and noise is suppressed effectively. We conduct subjective qualitative and objective quantitative comparisons between the proposed method and twelve state-of-the-art methods on two public datasets. The results show that our fused images significantly outperform the others in both visual quality and evaluation metrics. In addition, we perform ablation experiments to verify the function and effectiveness of each module in the proposed method. The source code is available at https://github.com/lok-18/LCNDR .
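The abstract describes generating per-pixel weight maps to blend over- and under-exposed inputs. The sketch below is a deliberately simplified, hypothetical illustration of weight-map fusion (a Mertens-style well-exposedness weighting, not the paper's LCNDR network): each pixel in each exposure is scored by how close its intensity is to mid-gray, the scores are normalized across exposures, and the images are blended by those weights.

```python
import numpy as np

def fuse_exposures(under, over, sigma=0.2):
    """Toy weight-map fusion of two exposures (NOT the paper's method).

    under, over: float arrays in [0, 1] of identical shape.
    Each pixel is weighted by a Gaussian well-exposedness score
    centered at mid-gray (0.5), then weights are normalized per pixel
    so they sum to 1 across the two exposures.
    """
    stack = np.stack([under, over])                      # (2, H, W)
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) # exposedness score
    w = w / w.sum(axis=0, keepdims=True)                 # normalize per pixel
    return (w * stack).sum(axis=0)                       # weighted blend

# Synthetic example: a uniformly dark and a uniformly bright exposure.
under = np.full((4, 4), 0.1)
over = np.full((4, 4), 0.9)
fused = fuse_exposures(under, over)
# 0.1 and 0.9 are equally far from 0.5, so the weights are equal and
# every fused pixel lands at 0.5.
```

The paper's actual pipeline learns these weights with a self-attention module and further refines edges with a dedicated revision module; the fixed Gaussian score here only stands in for that learned weighting.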
ISSN: 1051-8215; 1558-2205
DOI: 10.1109/TCSVT.2022.3202692