Background Interest in alternatives to randomisation in the evaluation of public health interventions has recently increased. We aim to describe specific scenarios in which randomised trials may not be possible and to describe, exemplify and assess alternative strategies.
Methods Non-systematic exploratory review.
Results In many scenarios barriers are surmountable so that randomised trials (including stepped-wedge and crossover trials) are possible. It is possible to rank alternative designs but context will also determine which choices are preferable. Evidence from non-randomised designs is more convincing when confounders are well-understood, measured and controlled; there is evidence for causal pathways linking intervention and outcomes and/or against other pathways explaining outcomes; and effect sizes are large.
Conclusion Non-randomised designs might provide adequate evidence to inform decisions when interventions are demonstrably feasible and acceptable, and where existing evidence suggests there is little potential for harm. We caution, however, that such designs may not provide adequate evidence when the feasibility or acceptability of an intervention is doubtful, and where existing evidence suggests that benefits may be marginal and/or harms possible.
- Evaluation me
- public health policy
- randomised trials
Funding The work was unfunded. JH is supported by an MRC/ESRC interdisciplinary postdoctoral fellowship.
Competing interests None.
Ethics approval Not required.
Provenance and peer review Not commissioned; externally peer reviewed.