Repeatability and reproducibility of various 4D Flow MRI postprocessing software programs in a multi-software and multi-vendor cross-over comparison study
Published in: Journal of cardiovascular magnetic resonance, 2023-03, Vol. 25 (1), p. 22, Article 22
Format: Article
Language: English
Summary: Different software programs are available for the evaluation of 4D Flow cardiovascular magnetic resonance (CMR). Good agreement of results between programs is a prerequisite for acceptance of the method. The goal was therefore to compare quantitative results from a cross-over comparison in individuals examined on two scanners of different vendors and analyzed with four postprocessing software packages.
Eight healthy subjects (27 ± 3 years, 3 women) were each examined on two 3T CMR systems (Ingenia, Philips Healthcare; MAGNETOM Skyra, Siemens Healthineers) with a standardized 4D Flow CMR sequence. Six manually placed aortic contours were evaluated with Caas (Pie Medical Imaging, SW-A), cvi42 (Circle Cardiovascular Imaging, SW-B), GTFlow (GyroTools, SW-C), and MevisFlow (Fraunhofer Institute MEVIS, SW-D) to analyze seven clinically used parameters, including stroke volume, peak flow, peak velocity, and area, as well as wall shear stress values that are typically used in research. Statistical analysis of inter- and intrareader variability and of inter-software and inter-scanner agreement included calculation of absolute and relative error (ER), the intraclass correlation coefficient (ICC), Bland–Altman analysis, and equivalence testing based on the assumption that inter-software differences needed to be within 80% of the range of intrareader differences.
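The agreement metrics named above (relative error reported as mean ± SD, Bland–Altman limits of agreement, and the ICC) can be sketched in Python. This is an illustrative sketch, not the authors' actual analysis code: the function names are invented for this example, and the choice of the two-way random-effects, absolute-agreement, single-measures form ICC(2,1) is an assumption, since the abstract does not specify which ICC variant was used.

```python
import numpy as np

def relative_error(a, b):
    # Relative error in percent of the pairwise mean, summarized as
    # mean ± SD as in the abstract (e.g. "ER = 3 ± 8%").
    er = 100.0 * (a - b) / ((a + b) / 2.0)
    return er.mean(), er.std(ddof=1)

def bland_altman(a, b):
    # Bland–Altman statistics: mean difference (bias) and the
    # 95% limits of agreement (bias ± 1.96 * SD of the differences).
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

def icc2_1(X):
    # ICC(2,1): two-way random effects, absolute agreement, single
    # measures, computed from the ANOVA mean squares.
    # X has shape (n subjects, k raters/software programs).
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)   # per-subject means
    col_means = X.mean(axis=0)   # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()   # between subjects
    ssc = n * ((col_means - grand) ** 2).sum()   # between raters
    sse = ((X - grand) ** 2).sum() - ssr - ssc   # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With two identical measurement columns, `icc2_1` returns 1.0 and `bland_altman` returns zero bias with zero-width limits, which is a quick sanity check of the implementation.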
SW-A and SW-C were the only software programs showing agreement for stroke volume (ICC = 0.96; ER = 3 ± 8%), peak flow (ICC = 0.97; ER = −1 ± 7%), and area (ICC = 0.81; ER = 2 ± 22%). Results from SW-A/D and SW-C/D were equivalent only for area and peak flow. Other software pairs did not yield equivalent results for routinely used clinical parameters. Peak velocity in particular showed poor agreement (ICC ≤ 0.4) between all software packages except SW-A/D, which showed good agreement (ICC = 0.80). Inter- and intrareader consistency for clinically used parameters was best for SW-A and SW-D (ICC = 0.56–0.97) and worst for SW-B (ICC = −0.01–0.71). Of note, inter-scanner differences per individual tended to be smaller than inter-software differences.
Of all tested software programs, only SW-A and SW-C can be used equivalently for determination of stroke volume, peak flow, and vessel area. Irrespective of the applied software and scanner, the high intra- and interreader variability for all parameters has to be taken into account before introducing 4D Flow CMR in clinical routine. Especially in multicenter clinical trials a si…
ISSN: 1097-6647, 1532-429X
DOI: 10.1186/s12968-023-00921-4