Alternatives to randomisation in the evaluation of public-health interventions: statistical analysis and causal inference
- 1London School of Hygiene and Tropical Medicine, London, UK
- 2EPPI Centre, Social Science Research Unit, Institute of Education, University of London, London, UK
- Correspondence to Professor Simon Cousens, Infectious Disease Epidemiology Unit, London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT, UK;
Contributors SC drafted the paper, reviewed the material which informs it and developed the simulation of ‘Lord's paradox’. JH revised the paper and suggested the inclusion of a simulation. CB, BA and JT contributed to the drafting. BRK and RH suggested examples and arguments for the paper and commented on successive drafts. All authors participated in writing this paper and have seen and approved the final version. SC had final responsibility for the decision to submit the paper for publication.
- Accepted 15 May 2009
- Published Online First 6 August 2009
Background In non-randomised evaluations of public-health interventions, statistical methods to control confounding will usually be required. We review approaches to the control of confounding and discuss issues in drawing causal inference from these studies.
Methods Non-systematic review of the literature and mathematical data simulation.
Results Standard stratification and regression techniques will often be appropriate, but propensity scores may be useful where many confounders need to be controlled and data are limited. All these techniques require that key putative confounders are measured accurately. Instrumental variables offer, in theory, a solution to the problem of unknown or unmeasured confounders, but identifying an instrument which meets the required conditions will often be challenging. Obtaining measurements of the outcome variable in both intervention and control groups before the intervention is introduced allows balance to be assessed, and these data may be used to help control confounding. However, imbalance in outcome measures at baseline poses challenges for the analysis and interpretation of the evaluation, highlighting the value of adopting a design strategy that maximises the likelihood of achieving balance. Finally, when it is not possible to have any concurrent control group, making multiple measures of outcome pre- and postintervention can enable the estimation of intervention effects with appropriate statistical models.
Conclusion For non-randomised designs, careful statistical analysis can help reduce bias by confounding in estimating intervention effects. However, investigators must report their methods thoroughly and be conscious and critical of the assumptions they must make whenever they adopt these designs.
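The logic of controlling confounding by stratification, which underlies the propensity-score methods mentioned above, can be illustrated with a minimal simulation. This is a hypothetical sketch (simulated data, not from the paper): a binary confounder both raises the chance of receiving the intervention and improves the outcome, so the crude comparison is biased, while averaging stratum-specific effects recovers the true intervention effect.

```python
# Minimal simulation (hypothetical data) of confounding bias in a
# non-randomised comparison, and its removal by stratification on the
# confounder -- the idea underlying propensity-score adjustment.
import random

random.seed(42)
N = 100_000

# Binary confounder Z; treatment T is more likely when Z = 1, so the
# propensity score e(Z) = P(T=1 | Z) takes two values: 0.2 and 0.8.
# Outcome Y depends on both T (true effect = 2) and Z (effect = 3).
data = []
for _ in range(N):
    z = random.random() < 0.5
    t = random.random() < (0.8 if z else 0.2)
    y = 2.0 * t + 3.0 * z + random.gauss(0.0, 1.0)
    data.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Crude comparison: biased, because Z differs between treatment groups.
crude = (mean([y for z, t, y in data if t])
         - mean([y for z, t, y in data if not t]))

# Stratified comparison: estimate the effect within each stratum of the
# propensity score (here, each level of Z) and average, weighted by
# stratum size.
adjusted = 0.0
for stratum in (False, True):
    sub = [(t, y) for z, t, y in data if z == stratum]
    diff = (mean([y for t, y in sub if t])
            - mean([y for t, y in sub if not t]))
    adjusted += diff * len(sub) / N

print(f"crude estimate:    {crude:.2f}")     # inflated by confounding
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true effect, 2
```

As the abstract notes, this works only because the confounder was measured; stratification cannot remove bias from unknown or unmeasured confounders.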
- Evaluation studies
- propensity scores
- instrumental variables
- public health
- randomised trials
See Commentary, p 596
Funding JH was supported by an MRC/ESRC interdisciplinary postdoctoral fellowship.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.