
Improved Few-Shot SAR Image Generation by Enhancing Diversity

Bibliographic Details
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2024, Vol. 17, p. 3394-3408
Main Authors: Bao, Jianghan; Yu, Wen Ming; Yang, Kaiqiao; Liu, Che; Cui, Tie Jun
Format: Article
Language: English
Description
Summary: Due to their remarkable generation capabilities, deep-learning-based (DL) generative models have been widely applied to synthetic aperture radar (SAR) image synthesis. Such data-driven DL methods usually require abundant training samples to guarantee performance. However, the number of SAR images available for training is often insufficient because acquisition is expensive. This typical few-shot image generation (FSIG) task remains not fully investigated. In this article, we propose an optical-to-SAR (O2S) image translation model with a pairwise distance (PD) loss to enhance the diversity of the generated images. First, we replace the semantic maps used as the network input in previous studies with more easily available optical images and adopt pix2pix, a popular model for image-to-image translation tasks, as the foundation network. Second, inspired by FSIG work in the traditional computer vision field, we propose a similarity preservation term in the loss function, which encourages the generated images to inherit the similarity relationships of the corresponding simulated SAR images. Third, data augmentation experiments on the MSTAR dataset indicate the effectiveness of our model. With only five samples per target category and six categories in total, the basic O2S network boosts classification accuracy by 4.81% and 2.27% for data at depression angles of 15° and 17°, respectively. The PD loss brings an additional 2.23% and 1.78% improvement. The investigation of similarity curves also suggests that the generated images enhanced by the PD loss exhibit similarity behaviors closer to those of real SAR images.
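
The abstract does not give a formula for the similarity preservation term, but such pairwise-distance losses are commonly implemented by matching the pairwise similarity structure of the generated batch to that of a reference batch. The sketch below is a minimal PyTorch illustration under that assumption; the function name `pairwise_distance_loss`, the use of cosine similarity, and the KL-divergence matching are illustrative choices, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def pairwise_distance_loss(generated_feats: torch.Tensor,
                           simulated_feats: torch.Tensor) -> torch.Tensor:
    """Illustrative pairwise-distance (PD) loss (assumed formulation).

    Encourages generated SAR images to inherit the similarity relationships
    of the corresponding simulated SAR images: the softmax-normalized
    pairwise similarity distribution of the generated batch is pulled
    toward that of the simulated batch.

    Both inputs are (B, D) feature vectors, e.g. flattened images or
    encoder features; the choice of feature space is an assumption.
    """
    def similarity_distribution(feats: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between every pair of samples in the batch.
        feats = F.normalize(feats, dim=1)
        sim = feats @ feats.t()                        # (B, B)
        # Drop the self-similarity entries on the diagonal.
        b = sim.size(0)
        mask = ~torch.eye(b, dtype=torch.bool, device=sim.device)
        sim = sim[mask].view(b, b - 1)
        # Turn each row into a probability distribution over the other samples.
        return F.softmax(sim, dim=1)

    p_gen = similarity_distribution(generated_feats)
    p_sim = similarity_distribution(simulated_feats)
    # KL divergence pulls the generated similarity structure toward the
    # simulated one, preserving relative pairwise distances (diversity).
    return F.kl_div(p_gen.log(), p_sim, reduction="batchmean")
```

In training, a term of this kind would typically be added to the pix2pix objective with a weighting hyperparameter; that combination is likewise an assumption based on the abstract, not a detail it specifies.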
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2024.3352237