Asymmetric slack contrastive learning for full use of feature information in image translation

Bibliographic Details
Published in: Knowledge-Based Systems, 2024-09, Vol. 299, Article 112136
Main Authors: Zhang, Yusen; Li, Min; Gou, Yao; He, Yujie
Format: Article
Language: English
Description
Summary: Recently, contrastive learning has proven powerful for cross-domain feature learning and has been widely used in image translation tasks. However, these methods often overlook how positive and negative samples differ in their ability to drive model optimization and treat them all equally, which weakens the feature representation ability of the generative model. In this paper, we propose a novel image translation model based on asymmetric slack contrastive learning. We design a new, asymmetric contrastive loss by introducing a slack adjustment factor. Theoretical analysis shows that the loss adapts its optimization to different positive and negative samples and significantly improves optimization efficiency. In addition, to better preserve local structural relationships during image translation, we construct a regional differential structural consistency correction block based on differential vectors. Comparative experiments against seven existing methods on five datasets indicate that our method maintains structural consistency between cross-domain images at a deeper level and is more effective at establishing real image-domain mappings, yielding higher-quality generated images.

Highlights:
• Image translation methods based on contrastive learning often ignore the differences between individual positive/negative samples.
• Our asymmetric slack contrastive learning adaptively improves the optimization efficiency of the contrastive loss.
• Local consistency improves the global consistency of the generated image.
• Preserving differential structural consistency helps eliminate local distortions in image translation.
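The record does not give the paper's exact loss formulation. Purely as an illustration of the idea described above (treating negatives unequally inside an InfoNCE-style patch contrastive loss), the following sketch applies a hypothetical sigmoid "slack" weight to each negative logit; the function name, the `alpha` factor, and the weighting form are assumptions, not the authors' method:

```python
import numpy as np

def slack_nce_loss(q, pos, negs, tau=0.07, alpha=0.5):
    """InfoNCE-style contrastive loss with a hypothetical per-negative slack weight.

    q:    (B, D) query features
    pos:  (B, D) positive features (same spatial location, other domain)
    negs: (B, N, D) negative features
    """
    # Cosine similarities via L2 normalization, scaled by temperature tau.
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    pos = pos / np.linalg.norm(pos, axis=-1, keepdims=True)
    negs = negs / np.linalg.norm(negs, axis=-1, keepdims=True)
    l_pos = np.sum(q * pos, axis=-1, keepdims=True) / tau   # (B, 1)
    l_neg = np.einsum('bd,bnd->bn', q, negs) / tau          # (B, N)

    # Hypothetical asymmetric slack: each negative is re-weighted by how
    # "hard" it is relative to the positive, so easy negatives contribute
    # less to the gradient than hard ones (assumed form, not from the paper).
    slack = 1.0 / (1.0 + np.exp(-alpha * (l_neg - l_pos)))
    logits = np.concatenate([l_pos, slack * l_neg], axis=1)  # positive at index 0

    # Numerically stable cross-entropy with the positive as the target class.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()
```

With `alpha = 0`, every slack weight is 0.5 and the loss reduces to a uniformly down-weighted symmetric InfoNCE; increasing `alpha` sharpens the asymmetry between easy and hard negatives.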
ISSN: 0950-7051
eISSN: 1872-7409
DOI: 10.1016/j.knosys.2024.112136