
Characterizing lognormal fractional-Brownian-motion density fields with a convolutional neural network

Bibliographic Details
Published in: Monthly Notices of the Royal Astronomical Society, 2020-03, Vol. 493 (1), pp. 161-170
Main Authors: Bates, M L, Whitworth, A P, Lomax, O D
Format: Article
Language:English
Description
In attempting to quantify statistically the density structure of the interstellar medium, astronomers have considered a variety of fractal models. Here, we argue that, to properly characterize a fractal model, one needs to define precisely the algorithm used to generate the density field, and to specify – at least – three parameters: one parameter constrains the spatial structure of the field, one parameter constrains the density contrast between structures on different scales, and one parameter constrains the dynamic range of spatial scales over which self-similarity is expected (either due to physical considerations, or due to the limitations of the observational or numerical technique generating the input data). A realistic fractal field must also be noisy and non-periodic. We illustrate this with the exponentiated fractional Brownian motion (xfBm) algorithm, which is popular because it delivers an approximately lognormal density field, and for which the three parameters are, respectively, the power spectrum exponent, β, the exponentiating factor, ${\cal S}$, and the dynamic range, ${\cal R}$. We then explore and compare two approaches that might be used to estimate these parameters: machine learning and the established Δ-variance procedure. We show that for 2 ≤ β ≤ 4 and $0\le {\cal S}\le 3$, a suitably trained Convolutional Neural Network is able to estimate objectively both β (with root-mean-square error $\epsilon_{\beta}\sim 0.12$) and ${\cal S}$ (with $\epsilon_{\cal S}\sim 0.29$). Δ-variance is also able to estimate β, albeit with a somewhat larger error ($\epsilon_{\beta}\sim 0.17$) and with some human intervention, but it is not able to estimate ${\cal S}$.
ISSN: 0035-8711, 1365-2966
DOI: 10.1093/mnras/staa122
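
The record contains no code, but the xfBm construction and the Δ-variance slope described in the abstract are straightforward to sketch. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it draws a Gaussian field with power spectrum P(k) ∝ k^(-β), exponentiates it with a factor S to obtain an approximately lognormal density field, and then estimates β from a crude difference-of-Gaussians ("Mexican hat") Δ-variance spectrum. The function names (xfbm_field, delta_variance), the choice of filter, and the fixed grid size standing in for the dynamic range ${\cal R}$ are illustrative assumptions, not details taken from the paper; note also that this FFT construction yields a periodic field, whereas the abstract stresses that a realistic fractal field should be noisy and non-periodic.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def xfbm_field(n=256, beta=3.0, S=1.0, seed=None):
        # Sketch of an exponentiated fBm (xfBm) field on an n x n periodic grid.
        # beta : power-spectrum exponent, P(k) ~ k**(-beta)
        # S    : exponentiating factor; S = 0 gives a uniform field
        rng = np.random.default_rng(seed)
        kx = np.fft.fftfreq(n) * n
        k = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
        k[0, 0] = 1.0                                  # avoid division by zero
        amp = k ** (-beta / 2.0)                       # |F(k)| ~ k**(-beta/2) so P(k) ~ k**(-beta)
        amp[0, 0] = 0.0                                # zero-mean fluctuations
        phases = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        fbm = np.fft.ifft2(amp * phases).real
        fbm /= fbm.std()                               # unit-variance Gaussian field
        rho = np.exp(S * fbm)                          # approximately lognormal density
        return rho / rho.mean()

    def delta_variance(field, lags):
        # Crude Delta-variance: variance of the field filtered with a
        # difference-of-Gaussians kernel of characteristic size L.
        return np.array([np.var(gaussian_filter(field, L / 2.0, mode='wrap') -
                                gaussian_filter(field, L, mode='wrap'))
                         for L in lags])

    density = xfbm_field(n=256, beta=2.8, S=1.5, seed=0)
    lags = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
    sig2 = delta_variance(np.log(density), lags)       # log-density recovers the underlying fBm field
    slope, _ = np.polyfit(np.log(lags), np.log(sig2), 1)
    beta_est = slope + 2.0     # for a 2D Gaussian fBm field, sigma_Delta^2(L) ~ L**(beta - 2)

The last two lines use the standard 2D scaling of the Δ-variance spectrum with the power-spectrum exponent; because the filter here is only a rough stand-in for the filters used in the established Δ-variance procedure, beta_est should be read as an order-of-magnitude check rather than a calibrated estimate.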