
Multiple Task-Oriented Encoders for Unified Image Fusion

Bibliographic Details
Main Authors: Li, Zhuoxiao, Liu, Jinyuan, Liu, Risheng, Fan, Xin, Luo, Zhongxuan, Gao, Wen
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Description
Summary: Image fusion methods have achieved remarkable progress, but they are typically designed for a single type of fusion task and overlook the deeper correlations across tasks. To address this, we integrate different image fusion tasks into a unified network. Our method consists of multiple task-oriented encoders and a generic decoder, together with a self-adapting loss function. The task-oriented encoders are trained to learn task-specific features, while the generic decoder reconstructs the fused features into a comprehensive image. By introducing the self-adapting loss, our method automatically adjusts to the characteristics of the source data in different tasks. In addition, we formulate a training strategy based on bilevel optimization that updates the multiple encoders and the generic decoder in an alternating manner. Extensive experimental results demonstrate the superior performance of our method over state-of-the-art methods.
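
The abstract describes multiple task-oriented encoders feeding a single generic decoder, trained with alternating updates. The PyTorch sketch below only illustrates that overall structure under stated assumptions: the class names, layer sizes, task list, treatment of the source images as input channels, plain L1 reconstruction target, and simple alternating schedule are placeholders, not the authors' implementation; the paper's self-adapting loss and bilevel optimization strategy are not reproduced here.

    # Minimal sketch (assumptions only): per-task encoders plus one shared decoder,
    # updated in alternation. Not the authors' code.
    import torch
    import torch.nn as nn

    class TaskEncoder(nn.Module):
        """One encoder per fusion task (e.g. multi-exposure, multi-focus, IR-visible)."""
        def __init__(self, in_ch=2, feat_ch=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.net(x)

    class GenericDecoder(nn.Module):
        """Shared decoder that reconstructs a fused image from task features."""
        def __init__(self, feat_ch=64, out_ch=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat_ch, out_ch, 3, padding=1), nn.Sigmoid(),
            )
        def forward(self, f):
            return self.net(f)

    # One encoder per task, a single shared decoder (task names are illustrative).
    tasks = ["multi_exposure", "multi_focus", "ir_visible"]
    encoders = nn.ModuleDict({t: TaskEncoder() for t in tasks})
    decoder = GenericDecoder()

    enc_opt = torch.optim.Adam(encoders.parameters(), lr=1e-4)
    dec_opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)

    def fusion_loss(fused, sources):
        # Placeholder loss: the paper uses a self-adapting loss; here a plain
        # L1 distance to the channel-wise maximum of the sources stands in.
        target = sources.max(dim=1, keepdim=True).values
        return nn.functional.l1_loss(fused, target)

    def train_step(task, sources, update_encoder):
        """Alternating update: gradients flow everywhere, but only one part steps."""
        fused = decoder(encoders[task](sources))
        loss = fusion_loss(fused, sources)
        opt = enc_opt if update_encoder else dec_opt
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Toy usage: random two-source batches, alternating encoder/decoder updates.
    for step in range(4):
        for task in tasks:
            sources = torch.rand(2, 2, 64, 64)  # (batch, source images as channels, H, W)
            train_step(task, sources, update_encoder=(step % 2 == 0))

Sharing one decoder across all encoders forces the per-task features into a common representation, which is what allows a single network to serve several fusion tasks in the manner the abstract describes.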
ISSN: 1945-788X
DOI: 10.1109/ICME51207.2021.9428212