RT Journal Article
SR Electronic
T1 Alternatives to randomisation in the evaluation of public-health interventions: statistical analysis and causal inference
JF Journal of Epidemiology and Community Health
JO J Epidemiol Community Health
FD BMJ Publishing Group Ltd
SP 576
OP 581
DO 10.1136/jech.2008.082610
VO 65
IS 7
A1 S Cousens
A1 J Hargreaves
A1 C Bonell
A1 B Armstrong
A1 J Thomas
A1 B R Kirkwood
A1 R Hayes
YR 2011
UL http://jech.bmj.com/content/65/7/576.abstract
AB Background In non-randomised evaluations of public-health interventions, statistical methods to control confounding will usually be required. We review approaches to the control of confounding and discuss issues in drawing causal inference from these studies. Methods Non-systematic review of literature and mathematical data-simulation. Results Standard stratification and regression techniques will often be appropriate, but propensity scores may be useful where many confounders need to be controlled, and data are limited. All these techniques require that key putative confounders are measured accurately. Instrumental variables offer, in theory, a solution to the problem of unknown or unmeasured confounders, but identifying an instrument which meets the required conditions will often be challenging. Obtaining measurements of the outcome variable in both intervention and control groups before the intervention is introduced allows balance to be assessed, and these data may be used to help control confounding. However, imbalance in outcome measures at baseline poses challenges for the analysis and interpretation of the evaluation, highlighting the value of adopting a design strategy that maximises the likelihood of achieving balance. Finally, when it is not possible to have any concurrent control group, making multiple measures of outcome pre- and postintervention can enable the estimation of intervention effects with appropriate statistical models. Conclusion For non-randomised designs, careful statistical analysis can help reduce bias by confounding in estimating intervention effects. However, investigators must report their methods thoroughly and be conscious and critical of the assumptions they must make whenever they adopt these designs.
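The abstract's point that regression adjustment can reduce confounding bias can be illustrated with a minimal data-simulation sketch (hypothetical variable names and effect sizes; not the authors' code): a confounder drives both intervention uptake and the outcome, so the crude difference in means is biased, while regressing the outcome on intervention status and the confounder recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder c influences both intervention uptake t and outcome y.
c = rng.normal(size=n)
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-c))).astype(float)  # uptake rises with c
y = 2.0 * t + 3.0 * c + rng.normal(size=n)                      # true effect = 2.0

# Crude estimate: difference in mean outcome between groups (confounded upwards).
crude = y[t == 1].mean() - y[t == 0].mean()

# Adjusted estimate: least-squares regression of y on t and c.
X = np.column_stack([np.ones(n), t, c])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
adjusted = beta[1]  # coefficient on t

print(f"crude: {crude:.2f}, adjusted: {adjusted:.2f}")
```

With these assumed parameters the crude estimate is inflated well above 2.0, while the adjusted estimate lies close to the true value, mirroring the abstract's point that such methods work only if the key confounder is measured accurately: omitting `c` from the regression restores the bias.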