Few-Shot Image Generation via Style Adaptation and Content Preservation
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2024-11, Vol. PP, p. 1-12 |
Main Authors: | , , , |
Format: | Article |
Language: | English |
Summary: | Training a generative model with limited data (e.g., 10 images) is a very challenging task. Many works propose to fine-tune a pretrained GAN model, but this can easily result in overfitting. In other words, such methods manage to adapt the style but fail to preserve the content, where style denotes the specific properties that define a domain, while content denotes the domain-irrelevant information that represents diversity. Recent works try to maintain a predefined correspondence to preserve the content; however, the resulting diversity is still insufficient, and the constraint may hinder style adaptation. In this work, we propose a paired image reconstruction approach for content preservation. We introduce an image translation module into GAN transfer, where the module teaches the generator to separate style and content, and the generator provides training data to the translation module in return. Qualitative and quantitative experiments show that our method consistently surpasses state-of-the-art methods in the few-shot setting. |
ISSN: | 2162-237X (print), 2162-2388 (electronic) |
DOI: | 10.1109/TNNLS.2024.3477467 |
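The summary describes an alternating scheme: the generator supplies paired images (source and adapted outputs from the same latent) to train the translation module, and the translation module's reconstruction in turn gives the generator a content-preservation signal while its style adapts. The toy sketch below illustrates only that training protocol, not the paper's actual architecture: images are reduced to (content, style) scalars, and `AdaptedGenerator`, `Translator`, `STYLE_SRC`/`STYLE_TGT`, and all update rules are hypothetical stand-ins chosen for a runnable stdlib example.

```python
import random

# Toy 1-D "images": a (content, style) pair. The frozen source generator
# keeps style at STYLE_SRC; the adapted generator should move its style
# toward STYLE_TGT (the few-shot target domain) while leaving content
# (the latent z itself) undistorted.
STYLE_SRC, STYLE_TGT = 0.0, 1.0

def g_src(z):
    """Frozen pretrained generator (source domain)."""
    return (z, STYLE_SRC)

class AdaptedGenerator:
    """Fine-tuned copy; content_scale drifting from 1.0 stands in for overfitting."""
    def __init__(self):
        self.style = STYLE_SRC
        self.content_scale = 1.0

    def __call__(self, z):
        return (self.content_scale * z, self.style)

class Translator:
    """Image translation module: maps target-domain images back to the source domain."""
    def __init__(self):
        self.style_shift = 0.0

    def __call__(self, img):
        c, s = img
        return (c, s - self.style_shift)

def train_step(g_tgt, t, lr=0.1):
    z = random.uniform(-1.0, 1.0)
    src, tgt = g_src(z), g_tgt(z)        # paired data from the same latent
    # 1) Translator learns from generator-provided pairs:
    #    gradient step so that T(tgt) matches src in style.
    t.style_shift += lr * ((tgt[1] - t.style_shift) - src[1])
    # 2) Generator: style-adaptation loss pulls style toward the target domain...
    g_tgt.style += lr * (STYLE_TGT - g_tgt.style)
    # ...while the paired-reconstruction loss T(G_tgt(z)) ~ G_src(z) preserves
    # content: gradient of 0.5 * (recon_content - src_content)^2 w.r.t. content_scale.
    recon = t(g_tgt(z))
    g_tgt.content_scale -= lr * (recon[0] - src[0]) * z

random.seed(0)
g_tgt, t = AdaptedGenerator(), Translator()
g_tgt.content_scale = 0.5                # start with distorted content
for _ in range(500):
    train_step(g_tgt, t)
```

After training, the style converges to the target domain while the reconstruction term drives `content_scale` back to 1.0, mirroring the claimed separation of style adaptation from content preservation.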