Enhancing synthetic training data for quantitative photoacoustic tomography with generative deep learning
Published in: | arXiv.org 2023-05 |
---|---|
Main Authors: | , |
Format: | Article |
Language: | English |
Summary: | Multiwavelength photoacoustic images encode information about a tissue's optical absorption distribution, which can be used to estimate its blood oxygen saturation (sO2) distribution, an important physiological indicator of tissue health and pathology. However, the wavelength dependence of the light fluence distribution complicates the recovery of accurate estimates and, in particular, prevents a straightforward spectroscopic inversion. Deep learning approaches have been shown to produce accurate estimates of sO2 from simulated data. However, the translation of generic supervised learning approaches to real tissues is hindered by the lack of real 'paired' training data (multiwavelength PA images of in vivo tissues with their corresponding sO2 distributions). Here, we discuss i) why networks trained on images simulated using conventional means are unlikely to generalise to real tissues, and ii) the prospects of two generative adversarial network based strategies for improving the generalisability of sO2-estimating networks trained on synthetic data: a) CycleGAN-driven unsupervised domain adaptation of conventionally simulated images, and b) the generation of paired training data using AmbientGANs. |
---|---|
ISSN: | 2331-8422 |
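The "straightforward spectroscopic inversion" the summary refers to can be sketched as a linear unmixing of per-wavelength absorption into oxy- and deoxyhaemoglobin concentrations. The sketch below is a toy illustration, not the paper's method: the extinction coefficients are hypothetical placeholders (real values come from tabulated haemoglobin spectra), and the fluence factor is an arbitrary choice used only to show why ignoring wavelength-dependent fluence biases the estimate.

```python
import numpy as np

# Linear spectroscopic inversion: unmix per-wavelength absorption into
# oxy-/deoxyhaemoglobin concentrations, then sO2 = c_HbO2 / (c_HbO2 + c_Hb).
# The extinction coefficients below are illustrative placeholders only.
E = np.array([[1050.0, 3750.0],   # "750 nm": [eps_HbO2, eps_Hb]
              [1200.0,  760.0]])  # "850 nm": [eps_HbO2, eps_Hb]

def so2_from_mua(mua):
    """Least-squares unmixing of absorption coefficients (one per wavelength)."""
    (c_hbo2, c_hb), *_ = np.linalg.lstsq(E, mua, rcond=None)
    return c_hbo2 / (c_hbo2 + c_hb)

# Ground truth: 80% oxygenation.
c_true = np.array([0.8, 0.2])
mua = E @ c_true
print(so2_from_mua(mua))       # recovers 0.8

# A PA image measures fluence x absorption. A wavelength-dependent fluence
# (here an arbitrary 40% drop at the second wavelength), if ignored, makes
# the same inversion return a biased sO2 estimate.
measured = mua * np.array([1.0, 0.6])
print(so2_from_mua(measured))  # biased well away from 0.8
```

Applied per pixel over a multiwavelength image, this is the inversion that fluence wavelength dependence invalidates, which is what motivates the learning-based estimators discussed in the paper.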
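The CycleGAN strategy mentioned in the summary rests on a cycle-consistency loss: mapping a simulated image to the "real" domain and back should reproduce the input. The following is a minimal sketch of that loss alone; the "generators" are toy invertible linear maps chosen for illustration, whereas in CycleGAN they are convolutional networks trained jointly with adversarial losses.

```python
import numpy as np

def cycle_consistency_loss(g_ab, g_ba, x):
    """Mean L1 distance between x and G_BA(G_AB(x))."""
    return np.abs(g_ba(g_ab(x)) - x).mean()

# Toy generators: an invertible linear map from domain A (simulated) to
# domain B ("real"), and its exact inverse going back.
A = np.array([[2.0, 0.0],
              [0.0, 0.5]])
g_ab = lambda x: x @ A
g_ba = lambda x: x @ np.linalg.inv(A)

# Stand-in for image features; two 2-D samples.
x = np.array([[1.0, -1.0],
              [2.0,  0.5]])

print(cycle_consistency_loss(g_ab, g_ba, x))          # ~0: consistent cycle
print(cycle_consistency_loss(g_ab, lambda y: y, x))   # > 0: broken cycle
```

Driving this loss to zero (alongside the adversarial terms) is what lets the mapping adapt simulated training images toward the real-image domain without any paired examples.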