
Comparison of Bayesian and frequentist monitoring boundaries motivated by the Multiplatform Randomized Clinical Trial

Bibliographic Details
Published in:Clinical trials (London, England), 2024-12, Vol.21 (6), p.701-709
Main Authors: Joo, Jungnam, Leifer, Eric S, Proschan, Michael A, Troendle, James F, Reynolds, Harmony R, Hade, Erinn A, Lawler, Patrick R, Kim, Dong-Yun, Geller, Nancy L
Format: Article
Language:English
Description
Summary:The coronavirus disease 2019 pandemic highlighted the need to conduct efficient randomized clinical trials with interim monitoring guidelines for efficacy and futility. Several randomized coronavirus disease 2019 trials, including the Multiplatform Randomized Clinical Trial (mpRCT), used Bayesian guidelines with the belief that they would lead to quicker efficacy or futility decisions than traditional "frequentist" guidelines, such as spending functions and conditional power. We explore this belief using an intuitive interpretation of Bayesian methods as translating prior opinion about the treatment effect into imaginary prior data. These imaginary observations are then combined with actual observations from the trial to make conclusions. Using this approach, we show that the Bayesian efficacy boundary used in mpRCT is actually quite similar to the frequentist Pocock boundary. The mpRCT's efficacy monitoring guideline considered stopping if, given the observed data, there was greater than 99% probability that the treatment was effective (odds ratio greater than 1). The mpRCT's futility monitoring guideline considered stopping if, given the observed data, there was greater than 95% probability that the treatment was less than 20% effective (odds ratio less than 1.2). The mpRCT used a normal prior distribution that can be thought of as supplementing the actual patients' data with imaginary patients' data. We explore the effects of varying probability thresholds and the prior-to-actual patient ratio in the mpRCT and compare the resulting Bayesian efficacy monitoring guidelines to the well-known frequentist Pocock and O'Brien-Fleming efficacy guidelines. We also contrast Bayesian futility guidelines with a more traditional 20% conditional power futility guideline. A Bayesian efficacy and futility monitoring boundary using a neutral, weakly informative prior distribution and a fixed probability threshold at all interim analyses is more aggressive than the commonly used O'Brien-Fleming efficacy boundary coupled with a 20% conditional power threshold for futility. The trade-off is that more aggressive boundaries tend to stop trials earlier, but incur a loss of power. Interestingly, the Bayesian efficacy boundary with 99% probability threshold is very similar to the classic Pocock efficacy boundary. In a pandemic where quickly weeding out ineffective treatments and identifying effective treatments is paramount, aggressive monitoring may be preferred to conservative monitoring.
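
As a rough illustration of the monitoring rule described in the summary, the sketch below computes the two posterior probabilities, P(OR > 1) for efficacy and P(OR < 1.2) for futility, under a normal approximation to the log odds ratio with a neutral normal prior, which can be read as adding imaginary prior patients to the observed patients. The specific prior standard deviation, interim estimate, and standard error are hypothetical and are not the mpRCT's actual values; this is a sketch of the general technique, not the trial's implementation.

```python
import numpy as np
from scipy.stats import norm

def bayesian_monitoring(theta_hat, se, prior_sd,
                        eff_threshold=0.99, fut_threshold=0.95,
                        fut_margin=np.log(1.2)):
    """Posterior-probability monitoring for a log odds ratio theta.

    Assumes a normal likelihood theta_hat ~ N(theta, se^2) and a
    neutral normal prior theta ~ N(0, prior_sd^2).  The prior acts
    like imaginary prior patients whose number grows as prior_sd
    shrinks relative to the standard error of the actual data.
    """
    # Conjugate normal posterior (precision-weighted combination)
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * (theta_hat / se**2)  # prior mean is 0
    post_sd = np.sqrt(post_var)

    # Efficacy rule: stop if P(OR > 1) = P(theta > 0 | data) > 0.99
    p_eff = 1.0 - norm.cdf(0.0, loc=post_mean, scale=post_sd)
    # Futility rule: stop if P(OR < 1.2) = P(theta < log 1.2 | data) > 0.95
    p_fut = norm.cdf(fut_margin, loc=post_mean, scale=post_sd)

    return {
        "P(OR > 1)": p_eff,
        "stop_for_efficacy": p_eff > eff_threshold,
        "P(OR < 1.2)": p_fut,
        "stop_for_futility": p_fut > fut_threshold,
    }

# Hypothetical interim look: observed log OR of 0.35 with standard error
# 0.15, and a weakly informative prior with standard deviation 1.
print(bayesian_monitoring(theta_hat=0.35, se=0.15, prior_sd=1.0))
```

A smaller prior standard deviation corresponds to more imaginary prior patients pulling the posterior toward no effect, which, as the article discusses, changes how aggressive the resulting boundary is relative to Pocock or O'Brien-Fleming.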
ISSN:1740-7745
1740-7753
DOI:10.1177/17407745241244801