
Feature fusion for inverse synthetic aperture radar image classification via learning shared hidden space


Bibliographic Details
Published in: Electronics Letters, 2021-12, Vol. 57 (25), p. 986–988
Main Authors: Lin, Wenhao; Gao, Xunzhang
Format: Article
Language:English
Description
Summary: Multi‐sensor fusion recognition is a meaningful task in ISAR image recognition. Compared with a single sensor, multi‐sensor fusion can provide richer target information, which is conducive to more accurate and robust identification. However, previous deep learning‐based fusion methods do not effectively deal with the redundancy and complementarity of information between different sources. In this letter, we construct a shared hidden space to align features from different sources. Accordingly, we design an end‐to‐end deep fusion framework to fuse dual ISAR images at the feature level. To combine the multi‐source information, deep generalised canonical correlation analysis (DGCCA) is used as a loss term to map the features extracted from the dual inputs onto the shared hidden space. Moreover, we propose an efficient and lightweight spatial attention module, named the united attention module, which can be embedded between dual‐stream convolutional neural networks (CNNs) to promote DGCCA optimisation through information interaction. Compared with other deep fusion frameworks, our model achieves competitive performance in ISAR image fusion for classification.
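The DGCCA loss used above generalises (linear) GCCA, which seeks a shared orthonormal representation G that each view's features can be linearly mapped onto. A minimal numpy sketch of that linear objective — minimise the sum over views of ||G − X_j U_j||² with GᵀG = I, solved via the eigenvectors of the summed per-view projection matrices — is shown below; the function name, the ridge term, and the toy dimensions are illustrative assumptions, not the authors' implementation (which applies the objective to deep CNN features).

```python
import numpy as np

def gcca_shared_space(views, r):
    """Linear GCCA sketch: find a shared (n, r) representation G with
    orthonormal columns that all views can approximately map onto.

    views : list of (n, d_j) feature matrices, one per source
    r     : dimension of the shared hidden space
    """
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        # Projection onto the column space of X (small ridge for stability).
        P = X @ np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T)
        M += P
    # G = top-r eigenvectors of the summed projections (M is symmetric,
    # so eigh returns orthonormal eigenvectors, ascending eigenvalues).
    _, eigvecs = np.linalg.eigh(M)
    G = eigvecs[:, -r:]
    # Per-view maps U_j minimising ||G - X_j U_j||^2 (least squares).
    Us = [np.linalg.lstsq(X, G, rcond=None)[0] for X in views]
    return G, Us

# Toy usage: two "sensors" with 50 samples and different feature widths.
G, Us = gcca_shared_space([np.random.randn(50, 8), np.random.randn(50, 12)], r=3)
```

DGCCA replaces the linear maps U_j with deep networks and back-propagates through this objective, which is why it can serve directly as a loss term between the dual CNN streams.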
ISSN: 0013-5194
eISSN: 1350-911X
DOI: 10.1049/ell2.12311