
Health Services Research and Policy
Assessing the evaluability of complex public health interventions: Five questions for researchers, funders and policymakers
D Ogilvie,1* S Cummins,2 M Petticrew,3 M White,4 A Jones,5 K Wheeler2

  1. MRC Epidemiology Unit & UKCRC Centre for Diet and Activity Research (CEDAR), Institute of Public Health, Cambridge, UK
  2. Healthy Environments Research Programme, School of Geography, Queen Mary, University of London, London, UK
  3. Department of Social and Environmental Health Research, London School of Hygiene and Tropical Medicine, London, UK
  4. Institute of Health and Society and UKCRC Centre for Translational Research, Newcastle University, Newcastle, UK
  5. School of Environmental Sciences, University of East Anglia, Norwich, UK

Abstract

Background Government policies and programmes to improve public health can often be regarded as complex interventions, in that they typically involve the flexible or tailored implementation of multiple interacting activities in a variety of settings to bring about population behaviour change and health improvement. However, the evidence to support their development and implementation is often weak. Recognition of this ‘knowledge gap’ has led to repeated calls for more and better evaluation of the health impact of these complex ‘natural experiments’. Few would disagree in principle with this evaluative ‘call to arms’, but acting on it raises a number of scientific, practical and prioritisation issues, especially in a climate of public sector financial restraint.

Objectives To develop an approach to appraising the evaluability of complex public health interventions, which stimulates and structures debate between researchers, funders and policymakers and helps them make decisions about evaluation within and between interventions as they evolve from initial concept to roll-out of full-scale intervention packages.

Methods Using the Healthy Community Challenge Fund (‘Healthy Towns’) in England as a case study of a complex intervention programme, and worked examples of two specific interventions within that programme, we have developed a set of five questions in the spirit of the Bradford Hill criteria: (1) Where is a particular intervention situated in the evolutionary flowchart of an overall intervention programme? (2) What difference will an evaluative study of this intervention make to policy decisions? (3) What are the plausible sizes and distribution of the hypothesised impacts of the intervention? (4) How will the findings of an evaluative study add value to the existing body of scientific evidence? (5) Is it practicable to evaluate the intervention in the time available?
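Read as a checklist, the five questions lend themselves to a simple structured appraisal. The sketch below is purely illustrative and is not part of the paper: the three-level Evidence scale, the appraise() function and its decision rule are assumptions introduced here only to show how ratings against the five questions might be recorded and summarised for a given intervention.

```python
from enum import Enum

class Evidence(Enum):
    """Hypothetical three-level rating of the evidence available for each question."""
    WEAK = 1
    MODERATE = 2
    STRONG = 3

# The five evaluability questions, as stated in the abstract.
QUESTIONS = [
    "Where is the intervention situated in the evolutionary flowchart of the overall programme?",
    "What difference will an evaluative study make to policy decisions?",
    "What are the plausible sizes and distribution of the hypothesised impacts?",
    "How will the findings add value to the existing body of scientific evidence?",
    "Is it practicable to evaluate the intervention in the time available?",
]

def appraise(intervention: str, ratings: list[Evidence]) -> str:
    """Summarise an appraisal from one rating per question.

    The 'weakest answer drives the recommendation' rule is an illustrative
    assumption made here, not a rule proposed by the authors.
    """
    if len(ratings) != len(QUESTIONS):
        raise ValueError("One rating is required per question.")
    weakest = min(ratings, key=lambda r: r.value)
    if weakest is Evidence.WEAK:
        return (f"{intervention}: evidence too weak on at least one question; "
                "revisit before committing evaluation resources.")
    if weakest is Evidence.MODERATE:
        return f"{intervention}: candidate for a limited or staged evaluation."
    return f"{intervention}: strong case for a full-scale evaluative study."

# Worked example using one of the paper's case-study interventions;
# the ratings themselves are invented for illustration.
print(appraise("Family health hubs", [
    Evidence.STRONG,    # Q1: position in the programme's evolutionary flowchart
    Evidence.MODERATE,  # Q2: difference an evaluation would make to policy
    Evidence.WEAK,      # Q3: plausible sizes and distribution of impacts
    Evidence.MODERATE,  # Q4: added value to the existing evidence base
    Evidence.STRONG,    # Q5: practicability in the time available
]))
```

In the paper itself the questions are intended to stimulate and structure debate between researchers, funders and policymakers rather than to drive mechanical scoring; the sketch serves only to make the question-by-question structure concrete.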

Results Using the specific worked examples of ‘family health hubs’ and ‘healthy urban planning’, we show how our approach can be used to identify the types of knowledge that any possible evaluation might generate, given the strength of the evidence available in response to each of the five questions, and to support more systematic consideration of resource allocation decisions according to the types of knowledge required.

Conclusions The principles of our approach are potentially generalisable and could be tested and refined in the context of other complex public health and wider social interventions.
