
Statistical inversion and Monte Carlo sampling methods in electrical impedance tomography

Bibliographic Details
Published in: Inverse Problems 2000-10, Vol. 16 (5), p. 1487-1522
Main Authors: Kaipio, Jari P., Kolehmainen, Ville, Somersalo, Erkki, Vauhkonen, Marko
Format: Article
Language:English
Description
Summary: This paper discusses the electrical impedance tomography (EIT) problem: electric currents are injected into a body with unknown electromagnetic properties through a set of contact electrodes, and the voltages needed to maintain these currents are measured. The objective is to estimate the unknown resistivity, or more generally the impedivity distribution of the body, based on this information. The most commonly used method to tackle this problem in practice is gradient-based local linearization. We give a proof for the differentiability of the electrode boundary data with respect to the resistivity distribution and the contact impedances. Due to the ill-posedness of the problem, regularization has to be employed. In this paper, we consider the EIT problem in the framework of Bayesian statistics, where the inverse problem is recast as a problem of statistical inference: the task is to estimate the posterior distribution of the unknown parameters conditioned on measurement data. From the posterior density, various estimates for the resistivity distribution can be calculated, as well as a posteriori uncertainties. The search for the maximum a posteriori estimate is typically an optimization problem, while the conditional expectation is computed by integrating the variable with respect to the posterior probability distribution. In practice, especially when the dimension of the parameter space is large, this integration must be done by Monte Carlo methods such as Markov chain Monte Carlo (MCMC) integration. These methods can also be used to calculate a posteriori uncertainties for the estimators. In this paper, we concentrate on MCMC integration methods. In particular, we demonstrate by numerical examples the statistical approach when the prior densities are nondifferentiable, such as priors penalizing the total variation or the L^1 norm of the resistivity.
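As a rough illustration of the statistical approach summarized above (not taken from the paper), a posterior of the typical form pi(rho | V) proportional to exp(-||V - U(rho)||^2 / (2 sigma^2) - alpha TV(rho)) can be explored with random-walk Metropolis-Hastings even though the total-variation prior is nondifferentiable. The sketch below does this for a toy one-dimensional "resistivity" vector; the linear stand-in forward map A, the noise level sigma and the prior weight alpha are arbitrary assumptions for illustration only, not the authors' EIT forward model.

# Minimal sketch (assumptions only): random-walk Metropolis-Hastings for a
# Gaussian likelihood combined with a nondifferentiable total-variation prior.
import numpy as np

rng = np.random.default_rng(0)

n = 16                                    # number of toy resistivity unknowns
rho_true = np.ones(n); rho_true[6:10] = 2.0
A = rng.normal(size=(20, n))              # stand-in linear forward map (assumption)
sigma, alpha = 0.05, 5.0                  # assumed noise std and TV prior weight
data = A @ rho_true + sigma * rng.normal(size=20)

def log_posterior(rho):
    """log pi(rho | data) up to an additive constant."""
    misfit = A @ rho - data
    tv = np.sum(np.abs(np.diff(rho)))     # total-variation term, nondifferentiable
    return -0.5 * np.dot(misfit, misfit) / sigma**2 - alpha * tv

# Random-walk Metropolis: perturb the current state, accept with the MH rule.
rho = np.ones(n)
logp = log_posterior(rho)
samples = []
for k in range(20000):
    prop = rho + 0.02 * rng.normal(size=n)
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:
        rho, logp = prop, logp_prop
    if k >= 5000:                         # discard burn-in samples
        samples.append(rho.copy())

samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0).round(2))   # conditional expectation estimate
print("posterior std :", samples.std(axis=0).round(2))    # a posteriori uncertainty estimate

The same chain of samples yields both the conditional-mean estimate and componentwise spread, which is the practical appeal of MCMC noted in the abstract; in a realistic EIT setting the cheap linear map A would be replaced by a finite-element solution of the complete electrode model.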
ISSN: 0266-5611, 1361-6420
DOI: 10.1088/0266-5611/16/5/321