Using natural experimental studies to guide public health action: turning the evidence-based medicine paradigm on its head
  1. David Ogilvie¹,
  2. Jean Adams¹,
  3. Adrian Bauman²,
  4. Edward W. Gregg³,
  5. Jenna Panter¹,
  6. Karen R. Siegel⁴,
  7. Nicholas J. Wareham¹,
  8. Martin White¹
  1. MRC Epidemiology Unit and Centre for Diet and Activity Research (CEDAR), University of Cambridge, Cambridge, UK
  2. Charles Perkins Centre and Prevention Research Collaboration, University of Sydney, Sydney, New South Wales, Australia
  3. School of Public Health, Imperial College London, London, UK
  4. National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia, USA
  Correspondence to Dr David Ogilvie, MRC Epidemiology Unit and Centre for Diet and Activity Research (CEDAR), University of Cambridge, Cambridge, UK; david.ogilvie{at}


Despite smaller effect sizes, interventions delivered at population level to prevent non-communicable diseases generally have greater reach, impact and equity than those delivered to high-risk groups. Nevertheless, how to shift population behaviour patterns in this way remains one of the greatest uncertainties for research and policy. Evidence about behaviour change interventions that are easier to evaluate tends to overshadow that for population-wide and system-wide approaches that generate and sustain healthier behaviours. Population health interventions are often implemented as natural experiments, which makes their evaluation more complex and unpredictable than a typical randomised controlled trial (RCT). We discuss the growing importance of evaluating natural experiments and their distinctive contribution to the evidence for public health policy. We contrast the established evidence-based practice pathway, in which RCTs generate ‘definitive’ evidence for particular interventions, with a practice-based evidence pathway in which evaluation can help adjust the compass bearing of existing policy. We propose that intervention studies should focus on reducing critical uncertainties, that non-randomised study designs should be embraced rather than tolerated and that a more nuanced approach to appraising the utility of diverse types of evidence is required. The complex evidence needed to guide public health action is not necessarily the same as that which is needed to provide an unbiased effect size estimate. The practice-based evidence pathway is neither inferior nor merely the best available when all else fails. It is often the only way to generate meaningful evidence to address critical questions about investing in population health interventions.

  • evaluation
  • natural experimental studies
  • non-randomised studies
  • practice-based evidence
  • public health policy

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 International (CC BY 4.0) licence, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and any changes made are indicated. See:


  • Contributors DO conceived of the original idea and drafted the initial manuscript. JA, AB, EWG, JP, KRS, NJW and MW provided critical feedback and contributed to the final version of the manuscript. DO is the guarantor.

  • Funding DO, JP and NJW are supported by the Medical Research Council (Unit Programme numbers MC_UU_12015/6 and MC_UU_12015/1). The paper was initially developed during a visiting appointment as Thought Leader in Residence at the School of Public Health, University of Sydney; the intellectual environment and financial support provided by the Prevention Research Collaboration are gratefully acknowledged. It was further developed under the auspices of the Centre for Diet and Activity Research (CEDAR), a UKCRC Public Health Research Centre of Excellence at the University of Cambridge; funding from the British Heart Foundation, Economic and Social Research Council, Medical Research Council, National Institute for Health Research and the Wellcome Trust, under the auspices of the United Kingdom Clinical Research Collaboration, is gratefully acknowledged. It was also developed through the authors’ collaboration in organising a workshop on the evaluation of natural experiments of social and environmental interventions with potential impacts on population risk of diabetes and cardiometabolic disease (Atlanta, 7–8 March 2017), at which much of the content was presented; funding from the Centers for Disease Control and Prevention is gratefully acknowledged.

  • Disclaimer The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention or other funders mentioned.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement There are no data in this work.
