
Hybrid projection methods for large-scale inverse problems with mixed Gaussian priors

Bibliographic Details
Published in: Inverse Problems 2021-04, Vol. 37 (4), p. 044002
Main Authors: Cho, Taewon; Chung, Julianne; Jiang, Jiahua
Format: Article
Language: English
Summary:When solving ill-posed inverse problems, a good choice of the prior is critical for the computation of a reasonable solution. A common approach is to include a Gaussian prior, which is defined by a mean vector and a symmetric and positive definite covariance matrix, and to use iterative projection methods to solve the corresponding regularized problem. However, a main challenge for many of these iterative methods is that the prior covariance matrix must be known and fixed (up to a constant) before starting the solution process. In this paper, we develop hybrid projection methods for inverse problems with mixed Gaussian priors where the prior covariance matrix is a convex combination of matrices and the mixing parameter and the regularization parameter do not need to be known in advance. Such scenarios may arise when data is used to generate a sample prior covariance matrix (e.g., in data assimilation) or when different priors are needed to capture different qualities of the solution. The proposed hybrid methods are based on a mixed Golub–Kahan process, which is an extension of the generalized Golub–Kahan bidiagonalization, and a distinctive feature of the proposed approach is that both the regularization parameter and the weighting parameter for the covariance matrix can be estimated automatically during the iterative process. Furthermore, for problems where training data are available, various data-driven covariance matrices (including those based on learned covariance kernels) can be easily incorporated. Numerical examples from tomographic reconstruction demonstrate the potential for these methods.
ISSN: 0266-5611
1361-6420
DOI: 10.1088/1361-6420/abd29d
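
Illustrative sketch: the abstract describes a prior covariance built as a convex combination of matrices, with a mixing parameter and a regularization parameter. The short Python sketch below shows only that basic idea on a toy 1-D deblurring problem: it forms Q(alpha) = alpha*Q1 + (1-alpha)*Q2 from two assumed covariances (an identity and a squared-exponential kernel) and computes a directly regularized solution for fixed alpha and lambda. It is not the authors' hybrid projection method based on the mixed Golub-Kahan process, which estimates both parameters automatically during the iterations; the forward operator, covariance choices, and parameter values here are hypothetical.

# Minimal sketch (not the paper's algorithm): Tikhonov-type regularization
# with a mixed Gaussian prior covariance Q(alpha) = alpha*Q1 + (1-alpha)*Q2.
# The paper's hybrid method estimates alpha and lam during a mixed
# Golub-Kahan iteration; here both are simply fixed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Toy forward operator: 1-D Gaussian blur (hypothetical test problem).
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

# True signal and noisy data.
x_true = np.sin(2 * np.pi * t / n) + (t > n // 2)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Two candidate prior covariances: white noise (identity) and a smooth
# squared-exponential kernel (both symmetric positive definite).
Q1 = np.eye(n)
Q2 = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 8.0) ** 2) + 1e-8 * np.eye(n)

alpha, lam = 0.5, 1e-2             # mixing and regularization parameters (fixed here)
Q = alpha * Q1 + (1 - alpha) * Q2  # convex combination of covariance matrices

# Regularized estimate: minimize ||A x - b||^2 + lam * x^T Q^{-1} x,
# i.e. solve the normal equations (A^T A + lam * Q^{-1}) x = A^T b.
x_reg = np.linalg.solve(A.T @ A + lam * np.linalg.inv(Q), A.T @ b)
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))

In this small dense setting the regularized system can be solved directly; the paper targets large-scale problems, where iterative projection onto Krylov-type subspaces replaces the explicit inverse used above.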