Fusion of Hyperspectral and LiDAR Data for Classification of Cloud-Shadow Mixed Remote Sensed Scene
| Published in: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017-08, Vol. 10 (8), pp. 3768-3781 |
|---|---|
| Main Authors: | , , , , , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Summary: | Recent advances in sensor design allow us to gather more useful information about the Earth's surface. Examples are hyperspectral (HS) and Light Detection and Ranging (LiDAR) sensors. These, however, have limitations: HS data cannot distinguish different objects made from similar materials and suffer severely in cloud-shadow regions, whereas LiDAR cannot separate distinct objects at the same altitude. To increase classification performance, fusion of HS and LiDAR data has recently attracted interest but remains challenging. In particular, existing methods perform poorly in cloud-shadow regions because of the lack of correspondence with shadow-free regions and insufficient training data. In this paper, we propose a new framework to fuse HS and LiDAR data for the classification of remote sensing scenes mixed with cloud-shadow. We process the cloud-shadow and shadow-free regions separately; our main contribution is a novel method to generate reliable training samples in the cloud-shadow regions. Classification is performed separately in the shadow-free regions (with a classifier trained on the available training samples) and the cloud-shadow regions (with a classifier trained on our generated training samples) by integrating spectral (the original HS image), spatial (morphological features computed on the HS image), and elevation (morphological features computed on the LiDAR data) features. The final classification map is obtained by fusing the results of the shadow-free and cloud-shadow regions. Experimental results on a real HS and LiDAR dataset demonstrate the effectiveness of the proposed method: the framework improves the overall classification accuracy by 4% for the whole scene and by 10% for the shadow-free regions compared with the other methods. |
| ISSN: | 1939-1404; 2151-1535 |
| DOI: | 10.1109/JSTARS.2017.2684085 |
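As a rough illustration of the workflow described in the summary, the sketch below stacks spectral, spatial, and elevation features and classifies the shadow-free and cloud-shadow regions separately before fusing the two maps. It is not the authors' implementation: the function names, the opening/closing profile standing in for the paper's morphological features, the random forest classifier, and the mask/training-sample inputs are all illustrative assumptions, and the paper's method for generating training samples inside cloud-shadow regions is not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of per-region classification
# of an HS + LiDAR scene split into shadow-free and cloud-shadow parts.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing
from sklearn.ensemble import RandomForestClassifier


def morphological_features(band, sizes=(3, 5, 7)):
    """Opening/closing profile over a single 2-D band; a simple stand-in for the
    morphological features mentioned in the abstract."""
    feats = []
    for s in sizes:
        feats.append(grey_opening(band, size=s))
        feats.append(grey_closing(band, size=s))
    return np.stack(feats, axis=-1)  # shape: (rows, cols, 2 * len(sizes))


def stack_features(hs_cube, lidar_dsm):
    """Concatenate spectral (HS bands), spatial (morphology on the first HS band,
    used here as a proxy), and elevation (morphology on the LiDAR DSM) features."""
    spatial = morphological_features(hs_cube[..., 0])
    elevation = morphological_features(lidar_dsm)
    return np.concatenate([hs_cube, spatial, elevation], axis=-1)


def classify_region(features, region_mask, train_idx, train_labels):
    """Train on the given samples (train_idx: flat pixel indices, train_labels:
    integer class labels) and predict only for pixels inside region_mask."""
    flat = features.reshape(-1, features.shape[-1])
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(flat[train_idx], train_labels)
    labels = np.zeros(flat.shape[0], dtype=int)
    region = region_mask.ravel()
    labels[region] = clf.predict(flat[region])
    return labels.reshape(region_mask.shape)


def fuse_maps(shadow_mask, map_shadow_free, map_shadow):
    """Merge the two per-region classification maps into the final map."""
    return np.where(shadow_mask, map_shadow, map_shadow_free)
```

In this sketch, the shadow-free map would come from `classify_region` run with the scene's given training samples, the cloud-shadow map from a second run with samples produced by a shadow-region sample-generation step, and `fuse_maps` would then merge the two per-region results into the final classification map.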