Abstract
Background Interest in alternatives to randomisation in the evaluation of public health interventions has recently increased. We aim to identify specific scenarios in which randomised trials may not be possible and to describe, exemplify and assess alternative strategies.
Methods Non-systematic exploratory review.
Results In many scenarios, barriers are surmountable so that randomised trials (including stepped-wedge and crossover trials) remain possible. Alternative designs can be ranked, but context will also determine which choices are preferable. Evidence from non-randomised designs is more convincing when confounders are well understood, measured and controlled; when there is evidence for causal pathways linking intervention and outcomes and/or against other pathways explaining outcomes; and when effect sizes are large.
Conclusion Non-randomised designs might provide adequate evidence to inform decisions when interventions are demonstrably feasible and acceptable, and where existing evidence suggests little potential for harm. However, we caution that such designs may not provide adequate evidence when intervention feasibility or acceptability is doubtful, or where existing evidence suggests that benefits may be marginal and/or harms possible.
- Evaluation methodology
- public health policy
- randomised trials