
Performance metrics for models designed to predict treatment effect

Bibliographic Details
Published in: BMC Medical Research Methodology, 2023-07, Vol. 23(1), Article 165
Main Authors: Maas, C. C. H. M.; Kent, D. M.; Hughes, M. C.; Dekker, R.; Lingsma, H. F.; van Klaveren, D.
Format: Article
Language:English
Description
Summary: Measuring the performance of models that predict individualized treatment effect is challenging because the outcomes of two alternative treatments are inherently unobservable in one patient. The C-for-benefit was previously proposed to measure discriminative ability, but measures of calibration and overall performance are still lacking. We aimed to propose metrics of calibration and overall performance for models predicting treatment effect in randomized clinical trials (RCTs).

Similar to the previously proposed C-for-benefit, we defined the observed pairwise treatment effect as the difference between outcomes in pairs of matched patients with different treatment assignment, matching each untreated patient with the nearest treated patient based on the Mahalanobis distance between patient characteristics. We then defined the Eavg-for-benefit, E50-for-benefit, and E90-for-benefit as the average, median, and 90th quantile of the absolute distance between the predicted pairwise treatment effects and the local-regression-smoothed observed pairwise treatment effects. Furthermore, we defined the cross-entropy-for-benefit and Brier-for-benefit as the logarithmic and average squared distance between predicted and observed pairwise treatment effects. In a simulation study, the metric values of deliberately "perturbed models" were compared with those of the data-generating model, i.e., the "optimal model". To illustrate these performance metrics, different modeling approaches for predicting treatment effect were applied to the data of the Diabetes Prevention Program: 1) a risk modeling approach with restricted cubic splines; 2) an effect modeling approach including penalized treatment interactions; and 3) the causal forest.

As desired, the performance metric values of the "perturbed models" were consistently worse than those of the "optimal model" (Eavg-for-benefit ≥ 0.043 versus 0.002, E50-for-benefit ≥ 0.032 versus 0.001, E90-for-benefit ≥ 0.084 versus 0.004, cross-entropy-for-benefit ≥ 0.765 versus 0.750, Brier-for-benefit ≥ 0.220 versus 0.218). Calibration, discriminative ability, and overall performance of the three models were similar in the case study. The proposed metrics are implemented in the publicly available R package "HTEPredictionMetrics" and are useful for assessing the calibration and overall performance of models predicting treatment effect in RCTs.
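For intuition, the following is a minimal base-R sketch of the matching-based metrics described in the summary. The simulated data, variable names, the sign convention for benefit, the use of the pair-average of predicted effects as the predicted pairwise effect, and the within-pair independence assumption behind the pairwise log-loss are illustrative assumptions for this sketch, not the authors' exact implementation; the "HTEPredictionMetrics" package provides the reference implementation.

## Minimal sketch (base R) of the matching-based metrics; simulated data,
## names, and sign conventions are illustrative assumptions.
set.seed(1)

## Simulated RCT with a binary outcome and two baseline covariates
n  <- 500
X  <- cbind(x1 = rnorm(n), x2 = rnorm(n))
w  <- rbinom(n, 1, 0.5)                      # 1 = treated, 0 = untreated
p0 <- plogis(-0.5 + 0.8 * X[, "x1"])         # model-predicted risk if untreated
p1 <- plogis(-1.0 + 0.8 * X[, "x1"])         # model-predicted risk if treated
y  <- rbinom(n, 1, ifelse(w == 1, p1, p0))   # observed binary outcome
tau_hat <- p0 - p1                           # predicted treatment effect (risk reduction)

## 1) Match each untreated patient to the nearest treated patient on the
##    Mahalanobis distance between covariates
S_inv     <- solve(cov(X))
treated   <- which(w == 1)
untreated <- which(w == 0)
pair <- sapply(untreated, function(i) {
  d2 <- mahalanobis(X[treated, , drop = FALSE], X[i, ], S_inv, inverted = TRUE)
  treated[which.min(d2)]
})

## 2) Observed pairwise treatment effect (1 = benefit, 0 = no effect, -1 = harm;
##    sign convention assumed) and predicted pairwise treatment effect (taken
##    here as the average of the pair's predicted effects -- an assumption)
obs_pair  <- y[untreated] - y[pair]
pred_pair <- (tau_hat[untreated] + tau_hat[pair]) / 2

## 3) Smooth observed pairwise effects against predicted ones by local regression
smoothed <- predict(loess(obs_pair ~ pred_pair))

## 4) Calibration metrics: Eavg-, E50-, and E90-for-benefit
abs_dist <- abs(pred_pair - smoothed)
Eavg <- mean(abs_dist)
E50  <- median(abs_dist)
E90  <- unname(quantile(abs_dist, 0.90))

## 5) Overall performance: Brier-for-benefit and cross-entropy-for-benefit.
##    The pairwise log-loss below assumes independent outcomes within a pair
##    (again an assumption made for this sketch).
Brier <- mean((obs_pair - pred_pair)^2)
q_u <- p0[untreated]                         # predicted risk of untreated member
q_t <- p1[pair]                              # predicted risk of treated member
p_benefit <- q_u * (1 - q_t)                 # P(obs_pair ==  1)
p_harm    <- (1 - q_u) * q_t                 # P(obs_pair == -1)
p_none    <- 1 - p_benefit - p_harm          # P(obs_pair ==  0)
lik <- ifelse(obs_pair == 1, p_benefit, ifelse(obs_pair == -1, p_harm, p_none))
CE  <- -mean(log(lik))

round(c(Eavg = Eavg, E50 = E50, E90 = E90,
        Brier = Brier, CrossEntropy = CE), 3)

In the Diabetes Prevention Program case study, tau_hat would instead come from a fitted risk model, effect model, or causal forest, and the metrics above would compare those predictions against the matched observed pairwise effects.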
ISSN:1471-2288
DOI:10.1186/s12874-023-01974-w