
On the Efficient Implementation of Sparse Bayesian Learning-Based STAP Algorithms

Bibliographic Details
Published in: Remote sensing (Basel, Switzerland), 2022-08, Vol.14 (16), p.3931
Main Authors: Liu, Kun, Wang, Tong, Wu, Jianxin, Liu, Cheng, Cui, Weichen
Format: Article
Language:English
Description
Summary: Sparse Bayesian learning-based space–time adaptive processing (SBL-STAP) algorithms can achieve superior clutter suppression performance with limited training sample support in practical heterogeneous and non-stationary clutter environments. However, when the system has high degrees of freedom (DOFs), SBL-STAP algorithms suffer from high computational complexity, since large-scale matrix calculations and inversions of large-scale covariance matrices are involved in the iterative process. In this article, we consider a computationally efficient implementation of SBL-STAP algorithms. The efficient implementation exploits the fact that the covariance matrices updated in the iterative process of SBL-STAP algorithms have a Hermitian Toeplitz-block-Toeplitz (HTBT) structure, so that the inverse covariance matrix can be expressed in closed form by a special case of the Gohberg–Semencul (G-S) formula. Based on the G-S-type factorization of the inverse covariance matrix and the structure of the dictionary matrix, almost all operations in the SBL-STAP algorithms can be performed with 2-D FFT/IFFT. As a result, compared with the original SBL-STAP algorithms, even for moderate data sizes, the proposed algorithms directly reduce the computational load by about two orders of magnitude without any performance loss. Finally, simulation results validate the effectiveness of the proposed algorithms.
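
The 2-D FFT/IFFT step mentioned in the summary rests on a standard property: multiplication by a Hermitian Toeplitz-block-Toeplitz matrix is a two-level discrete convolution, which can be evaluated after embedding the matrix in a block-circulant matrix with circulant blocks. The following NumPy sketch illustrates only that generic idea and is not the authors' code; the block count N, block size M, and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 6   # number of Toeplitz blocks / size of each block (toy values)

# Two-level lag sequence t[p, q], p = -(N-1)..N-1, q = -(M-1)..M-1, stored with
# offset (N-1, M-1); the Hermitian constraint is t[-p, -q] = conj(t[p, q]).
t = np.zeros((2 * N - 1, 2 * M - 1), dtype=complex)
for p in range(N):
    for q in range(-(M - 1), M):
        if p == 0 and q < 0:
            continue                      # conjugate pair already covered
        val = rng.standard_normal() + 1j * rng.standard_normal()
        t[p + N - 1, q + M - 1] = val
        t[N - 1 - p, M - 1 - q] = np.conj(val)
t[N - 1, M - 1] = t[N - 1, M - 1].real    # real main diagonal

def dense_htbt(t):
    # Explicit NM x NM matrix with entries R[i*M + k, j*M + l] = t[i - j, k - l].
    R = np.empty((N * M, N * M), dtype=complex)
    for i in range(N):
        for j in range(N):
            for k in range(M):
                for l in range(M):
                    R[i * M + k, j * M + l] = t[i - j + N - 1, k - l + M - 1]
    return R

def htbt_matvec_fft(t, x):
    # y = R @ x computed with 2-D FFTs after embedding the lag sequence in a
    # 2N x 2M block-circulant-with-circulant-blocks generating array.
    c = np.zeros((2 * N, 2 * M), dtype=complex)
    for p in range(-(N - 1), N):
        for q in range(-(M - 1), M):
            c[p % (2 * N), q % (2 * M)] = t[p + N - 1, q + M - 1]
    xpad = np.zeros((2 * N, 2 * M), dtype=complex)
    xpad[:N, :M] = x.reshape(N, M)
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(xpad))   # circular convolution
    return y[:N, :M].reshape(N * M)

x = rng.standard_normal(N * M) + 1j * rng.standard_normal(N * M)
print(np.allclose(dense_htbt(t) @ x, htbt_matvec_fft(t, x)))   # True

The dense product costs on the order of (NM)^2 operations, while the FFT-based version costs on the order of NM log(NM); this scaling gap is the generic mechanism behind the savings reported in the article, whose specific algorithm additionally relies on the G-S-type factorization of the inverse covariance matrix.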
ISSN: 2072-4292
DOI: 10.3390/rs14163931