Theory and methods
Alternatives to randomisation in the evaluation of public-health interventions: statistical analysis and causal inference
S Cousens,1 J Hargreaves,1 C Bonell,1 B Armstrong,1 J Thomas,2 B R Kirkwood,1 R Hayes1

1 London School of Hygiene and Tropical Medicine, London, UK
2 EPPI Centre, Social Science Research Unit, Institute of Education, University of London, London, UK

Correspondence to Professor Simon Cousens, Infectious Disease Epidemiology Unit, London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT, UK; simon.cousens@lshtm.ac.uk

Abstract

Background In non-randomised evaluations of public-health interventions, statistical methods to control confounding will usually be required. We review approaches to the control of confounding and discuss issues in drawing causal inference from these studies.

Methods Non-systematic review of the literature and mathematical data simulation.

Results Standard stratification and regression techniques will often be appropriate, but propensity scores may be useful where many confounders need to be controlled, and data are limited. All these techniques require that key putative confounders are measured accurately. Instrumental variables offer, in theory, a solution to the problem of unknown or unmeasured confounders, but identifying an instrument which meets the required conditions will often be challenging. Obtaining measurements of the outcome variable in both intervention and control groups before the intervention is introduced allows balance to be assessed, and these data may be used to help control confounding. However, imbalance in outcome measures at baseline poses challenges for the analysis and interpretation of the evaluation, highlighting the value of adopting a design strategy that maximises the likelihood of achieving balance. Finally, when it is not possible to have any concurrent control group, making multiple measures of outcome pre- and postintervention can enable the estimation of intervention effects with appropriate statistical models.
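
As an informal illustration of two of these approaches (a minimal sketch, not taken from the paper), the short Python simulation below assumes a single accurately measured confounder, a binary intervention and an invented instrument, and compares a naive estimate of the intervention effect with a propensity-score (inverse-probability-weighted) estimate and a hand-coded two-stage least-squares instrumental-variable estimate. All variable names, effect sizes and the simulated scenario are illustrative assumptions only.

# Illustrative simulation only: naive vs propensity-score vs instrumental-variable estimates
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000

confounder = rng.normal(size=n)            # e.g. a deprivation score
instrument = rng.binomial(1, 0.5, size=n)  # affects uptake but not the outcome directly

# Uptake of the intervention depends on the confounder and the instrument.
p_uptake = 1 / (1 + np.exp(-(0.8 * confounder + 1.0 * instrument - 0.5)))
treated = rng.binomial(1, p_uptake)

# Outcome depends on treatment and the confounder; the true effect is 2.0.
true_effect = 2.0
outcome = true_effect * treated + 1.5 * confounder + rng.normal(size=n)

# 1. Naive comparison of treated and untreated groups (confounded).
naive = sm.OLS(outcome, sm.add_constant(treated)).fit()

# 2. Propensity score: model uptake on the measured confounder, then weight each
#    subject by the inverse probability of the exposure status actually observed.
ps_fit = sm.Logit(treated, sm.add_constant(confounder)).fit(disp=False)
ps = ps_fit.predict(sm.add_constant(confounder))
weights = treated / ps + (1 - treated) / (1 - ps)
ipw = sm.WLS(outcome, sm.add_constant(treated), weights=weights).fit()

# 3. Instrumental variable, two-stage least squares by hand: regress uptake on the
#    instrument, then regress the outcome on predicted uptake (point estimate only;
#    the second-stage standard errors are not valid as printed).
stage1 = sm.OLS(treated, sm.add_constant(instrument)).fit()
predicted_uptake = stage1.predict(sm.add_constant(instrument))
stage2 = sm.OLS(outcome, sm.add_constant(predicted_uptake)).fit()

print("true effect:          ", round(true_effect, 2))
print("naive estimate:       ", round(naive.params[1], 2))
print("propensity score (IPW):", round(ipw.params[1], 2))
print("2SLS (instrument):    ", round(stage2.params[1], 2))

In this idealised setting the weighted and instrumental-variable estimates should lie close to the true effect, whereas the naive comparison is biased by the confounder; in real evaluations the corresponding assumptions (accurately measured confounders, an instrument meeting the required conditions) are far harder to satisfy.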

Conclusion For non-randomised designs, careful statistical analysis can help to reduce confounding bias when estimating intervention effects. However, investigators must report their methods thoroughly and remain conscious and critical of the assumptions they must make whenever they adopt these designs.

  • Evaluation studies
  • statistics
  • confounding
  • propensity scores
  • instrumental variables
  • public health
  • randomised trials

Footnotes

  • See Commentary, p 596

  • Funding JH was supported by an MRC/ESRC interdisciplinary postdoctoral fellowship.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.
