Sample generation based on a supervised Wasserstein Generative Adversarial Network for high-resolution remote-sensing scene classification
Published in: Information Sciences, 2020-10, Vol. 539, pp. 177-194
Main Authors: , , , , , , ,
Format: Article
Language: English
Summary: As high-resolution remote-sensing (HRRS) images have become increasingly available, scene classification, which aims at the intelligent classification of land cover and land use, has attracted growing attention. However, mainstream methods face a severe problem: many annotated samples are required to obtain an ideal model for scene classification. In the remote-sensing community, there is no dataset of a scale comparable to ImageNet (which contains over 14 million images) to meet the sample requirements of convolutional neural network (CNN)-based methods, and labeling new images is both labor-intensive and time-consuming. To address these problems, we present a new generative adversarial network (GAN)-based remote-sensing image generation method (GAN-RSIGM) that can create high-resolution annotated samples for scene classification. In GAN-RSIGM, the Wasserstein distance is used to measure the difference between the generator distribution and the real data distribution; this alleviates the vanishing-gradient problem during sample generation and drives the generator distribution toward the real data distribution. An auxiliary classifier is added to the discriminator, guiding the generator to produce class-consistent and distinct images. Regarding the network structure, both the discriminator and the generator are built by stacking residual blocks, which further stabilizes the training of GAN-RSIGM. Extensive experiments were conducted to evaluate the proposed method on two public HRRS datasets. The results demonstrated that the proposed method achieves satisfactory performance for high-quality annotated-sample generation, scene classification, and data augmentation.
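The summary combines three ingredients: a Wasserstein critic, an auxiliary classifier attached to the discriminator, and generator/discriminator networks built from residual blocks. The sketch below illustrates how such a combined objective is commonly wired up; it assumes PyTorch, and the names (`AuxCritic`, `lambda_cls`), the toy feature extractor, and the omission of a Lipschitz constraint (weight clipping or a gradient penalty, which Wasserstein critics need in practice) are simplifications for illustration, not the authors' GAN-RSIGM implementation.

```python
# Minimal sketch of a Wasserstein critic with an auxiliary classification head,
# as described in the summary. Names and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class AuxCritic(nn.Module):
    """Discriminator with a Wasserstein critic head and an auxiliary scene-class head."""
    def __init__(self, feat_dim: int = 128, num_classes: int = 21):
        super().__init__()
        # GAN-RSIGM stacks residual blocks here; a flatten + linear layer stands in.
        self.features = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.critic_head = nn.Linear(feat_dim, 1)            # unbounded real/fake score
        self.class_head = nn.Linear(feat_dim, num_classes)   # scene-category logits

    def forward(self, x):
        h = self.features(x)
        return self.critic_head(h).squeeze(1), self.class_head(h)

def critic_loss(critic, real_x, real_y, fake_x, fake_y, lambda_cls=1.0):
    """Approximate Wasserstein critic loss plus auxiliary classification terms."""
    ce = nn.CrossEntropyLoss()
    real_score, real_logits = critic(real_x)
    fake_score, fake_logits = critic(fake_x)
    w_term = fake_score.mean() - real_score.mean()           # widen real/fake score gap
    cls_term = ce(real_logits, real_y) + ce(fake_logits, fake_y)
    return w_term + lambda_cls * cls_term

def generator_loss(critic, fake_x, fake_y, lambda_cls=1.0):
    """Push generated samples toward the real distribution and their target class."""
    fake_score, fake_logits = critic(fake_x)
    return -fake_score.mean() + lambda_cls * nn.CrossEntropyLoss()(fake_logits, fake_y)
```

In a training loop, the critic would typically be updated several times per generator step, as is standard for Wasserstein-style GAN training.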
ISSN: 0020-0255; 1872-6291
DOI: 10.1016/j.ins.2020.06.018