Posterior Variance Predictions in Sparse Bayesian Learning under Approximate Inference Techniques

Bibliographic Details
Main Authors: Thomas, Christo Kurisummoottil, Slock, Dirk
Format: Conference Proceeding
Language: English
Description
Summary: Sparse Bayesian Learning (SBL), initially proposed in the Machine Learning (ML) literature, is an efficient and well-studied framework for sparse signal recovery. SBL uses hierarchical Bayes with a decorrelated Gaussian prior in which the variance profile is also to be estimated. This is more sparsity inducing than, e.g., a Laplacian prior. However, SBL does not scale with problem dimensions due to the computational complexity associated with the matrix inversion in Linear Minimum Mean Squared Error (LMMSE) estimation. To address this issue, various low-complexity approximate Bayesian inference techniques have been introduced for the LMMSE component, including Variational Bayesian (VB) inference, Space Alternating Variational Estimation (SAVE), and Message Passing (MP) algorithms such as Belief Propagation (BP), Expectation Propagation (EP), and Approximate MP (AMP). These algorithms may converge to the correct LMMSE estimate. However, in ML we are often also interested in posterior variance information. SBL via BP or SAVE provides (largely) underestimated variance estimates, whereas AMP-style algorithms may provide more accurate variance information. State Evolution analysis may show convergence of the (sum) MSE to the MMSE value, but we are also interested in the MSE of the individual components. To this end, utilizing random matrix theory results, we show that in the large system limit, with i.i.d. entries in the measurement matrix, the per-component MSE predicted by BP or xAMP converges to the Bayes optimal value.
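For context on the complexity issue mentioned in the abstract, the following is a minimal NumPy sketch of the standard (Tipping-style) SBL EM iteration, not the specific algorithms analyzed in the paper; the function name sbl_em, the fixed known noise variance, and the iteration count are illustrative assumptions. The per-component posterior variances diag(Sigma) are the quantities whose approximation quality the paper studies, and the cubic-cost matrix inversion in the E-step is what the VB/SAVE, BP/EP, and AMP variants aim to avoid.

import numpy as np

def sbl_em(A, y, noise_var=1e-2, n_iter=50):
    # A: (n, m) measurement matrix, y: (n,) observations, model y = A x + noise.
    n, m = A.shape
    gamma = np.ones(m)  # per-component prior variances (hyperparameters to estimate)
    for _ in range(n_iter):
        # E-step (LMMSE): posterior covariance and mean; the inverse is O(m^3),
        # which is the scalability bottleneck noted in the abstract.
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / noise_var
        # M-step: EM update of the per-component prior variances.
        gamma = mu**2 + np.diag(Sigma)
    # Return posterior mean, per-component posterior variances, and hyperparameters.
    return mu, np.diag(Sigma), gamma

# Example usage (hypothetical data): recover a 5-sparse vector from 80 noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x = np.zeros(200)
x[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
y = A @ x + 0.1 * rng.standard_normal(80)
mu, post_var, gamma = sbl_em(A, y, noise_var=0.01)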
ISSN:2576-2303
DOI:10.1109/IEEECONF51394.2020.9443473