One-sample log-rank tests with consideration of reference curve sampling variability
Published in: arXiv.org, 2021-09
Format: Article
Language: English
Summary: The one-sample log-rank test is the method of choice for single-arm Phase II trials with a time-to-event endpoint. It compares the survival of the patients to a reference survival curve that typically represents the expected survival under standard of care. The classical one-sample log-rank test, however, assumes that the reference survival curve is deterministic. This ignores that the reference curve is commonly estimated from historical data and is thus prone to statistical error. Ignoring the sampling variability of the reference curve inflates the type I error rate. For this reason, a new one-sample log-rank test is proposed that explicitly accounts for the statistical error made in estimating the reference survival curve. The test statistic and its distributional properties are derived using martingale techniques in the large-sample limit; in particular, a sample size formula is provided. Small-sample properties regarding type I and type II error rate control are studied by simulation. A case study examines the influence of several design parameters of a single-arm trial on the inflation of the type I error rate when reference curve sampling variability is ignored.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2109.02315
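To make the abstract's setup concrete, the following is a minimal sketch of the *classical* one-sample log-rank test it critiques: observed events O are compared with the expected number E obtained by evaluating a reference cumulative hazard Λ₀ at each patient's follow-up time, with (O − E)/√E treated as approximately standard normal under the null. The function name, the exponential reference with rate 0.1, and the simulated data are all illustrative assumptions, not taken from the paper; note that the reference curve is treated here as known without error, which is precisely the assumption the paper relaxes.

```python
import math
import random

def one_sample_logrank(times, events, cum_hazard):
    """Classical one-sample log-rank test statistic.

    times      : follow-up times (event or censoring time per patient)
    events     : 1 if the event was observed, 0 if censored
    cum_hazard : reference cumulative hazard Lambda_0(t), treated as
                 deterministic (no sampling variability)
    """
    O = sum(events)                              # observed number of events
    E = sum(cum_hazard(t) for t in times)        # expected events under H0
    return (O - E) / math.sqrt(E)                # approx. N(0, 1) under H0

# Toy example (hypothetical data): exponential reference with hazard
# rate 0.1, survival simulated under the null, administrative
# censoring at t = 12.
random.seed(1)
lam = 0.1
raw = [random.expovariate(lam) for _ in range(200)]
times = [min(t, 12.0) for t in raw]
events = [1 if t <= 12.0 else 0 for t in raw]
z = one_sample_logrank(times, events, lambda s: lam * s)
```

Because the data are generated under the null, the statistic z should fall in the usual standard-normal range; when Λ₀ is instead an estimate from historical data, this normal approximation understates the variance, which is the type I error inflation the paper quantifies.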