Calculation of Sample Size for Stroke Trials Assessing Functional Outcome: Comparison of Binary and Ordinal Approaches
- Authors
Joanna Wardlaw, Philippa Logan, Philip Bath, and Laura Gray
- Subjects
Stroke, Randomized controlled trial, Sample size determination, Ordinal data, Barthel index, Rankin Scale, Outcome data, Statistics, Neurology, Medicine
- Abstract
Background: Many acute stroke trials have given neutral results. Suboptimal statistical analyses may be failing to detect efficacy. Methods that take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials.
Methods: Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome (Rankin Scale and Barthel Index) were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes obtained from each method were compared using Friedman two-way ANOVA.
Results: Fifty-five comparisons (54 173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P …).
Conclusions: Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome and cause hazard in a subset of patients, e.g. thrombolysis.
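The paper's calculations were run on published trial data; as a minimal, self-contained sketch of the two approaches it compares, the Python below implements the standard two-proportion sample size formula and Whitehead's (1993) formula for ordered categorical outcomes under a proportional-odds effect. The function names, the example modified Rankin Scale distribution, the event proportions, and the odds ratio are all hypothetical illustrations, not figures taken from the paper.

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function


def n_binary(p_ctrl, p_trt, alpha=0.05, power=0.9):
    """Per-group sample size for comparing two proportions
    (normal approximation, 1:1 allocation)."""
    za, zb = z(1 - alpha / 2), z(power)
    variance = p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)
    return math.ceil((za + zb) ** 2 * variance / (p_ctrl - p_trt) ** 2)


def n_ordinal(mean_props, odds_ratio, alpha=0.05, power=0.9):
    """Per-group sample size under Whitehead's formula for ordered
    categorical outcomes, assuming proportional odds and 1:1 allocation.

    mean_props: anticipated proportion of patients in each outcome
    category, averaged over the two treatment arms (must sum to 1).
    """
    za, zb = z(1 - alpha / 2), z(power)
    denom = math.log(odds_ratio) ** 2 * (1 - sum(p ** 3 for p in mean_props))
    return math.ceil(6 * (za + zb) ** 2 / denom)


# Hypothetical numbers for illustration only: a 7-category modified
# Rankin Scale distribution averaged across arms, and a dichotomy at
# mRS 0-2 ("good outcome") with a 10-percentage-point treatment effect.
mrs_mean_props = [0.10, 0.15, 0.15, 0.20, 0.15, 0.10, 0.15]
print(n_binary(p_ctrl=0.35, p_trt=0.45))          # dichotomous outcome
print(n_ordinal(mrs_mean_props, odds_ratio=1.5))  # full ordinal scale
```

With these illustrative inputs the ordinal calculation returns a smaller per-group n than the binary one, reflecting the abstract's point: the ordinal method uses information from the whole outcome distribution rather than collapsing it to a single cut-point, so fewer patients are needed for the same statistical power.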
- Published
- 2008