
A Comparison of Feature Representations for Explosive Threat Detection in Ground Penetrating Radar Data

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, Dec. 2017, Vol. 55 (12), pp. 6736-6745
Main Authors: Sakaguchi, Rayn; Morton, Kenneth D.; Collins, Leslie M.; Torrione, Peter A.
Format: Article
Language:English
Summary: The automatic detection of buried threats in ground penetrating radar (GPR) data is an active area of research due to GPR's ability to detect both metal and nonmetal subsurface objects. Recent work on algorithms designed to distinguish between threats and nonthreats in GPR data has utilized computer vision methods to advance state-of-the-art detection and discrimination performance. Feature extractors, or descriptors, from the computer vision literature have exhibited excellent performance in representing 2-D GPR image patches and allow for robust classification of threats from nonthreats. This paper performs a broad study of feature extraction methods in order to identify characteristics that lead to improved classification performance under controlled conditions. The results presented in this paper show that gradient-based features, such as the edge histogram descriptor and the scale invariant feature transform, provide the most robust performance across a large and varied data set. These results indicate that various techniques from the computer vision literature can be successfully applied to target detection in GPR data and that more advanced techniques from the computer vision literature may provide further performance improvements.
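To make the idea of a gradient-based descriptor concrete, the following is a minimal, hypothetical sketch of an edge-histogram-style feature for a 2-D image patch: gradient orientations are quantized into a few bins and accumulated with magnitude weighting, then normalized. This is an illustrative simplification, not the exact edge histogram descriptor or SIFT implementation used in the paper; the function name and parameters are assumptions.

```python
import numpy as np

def edge_histogram_descriptor(patch, n_bins=4):
    """Illustrative gradient-orientation histogram for a 2-D patch
    (EHD-style sketch; not the authors' exact method)."""
    patch = np.asarray(patch, dtype=float)
    # Finite-difference gradients along the depth and cross-track axes.
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    # Fold orientation into [0, pi) so opposite-polarity edges share a bin.
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    # Magnitude-weighted votes per orientation bin.
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bins == b].sum()
    # L1-normalize so the descriptor is insensitive to overall contrast.
    total = hist.sum()
    return hist / total if total > 0 else hist

# Example: a patch varying only along the depth axis puts its energy
# into a single orientation bin.
patch = np.tile(np.sin(np.linspace(0, 4 * np.pi, 32))[:, None], (1, 32))
desc = edge_histogram_descriptor(patch)
```

A classifier would then be trained on such descriptors extracted from threat and nonthreat patches; the magnitude weighting and normalization are common design choices that make the feature robust to soil-dependent contrast variation.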
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2017.2732226