PT - JOURNAL ARTICLE
AU - D Ogilvie
AU - S Cummins
AU - M Petticrew
AU - M White
AU - A Jones
AU - K Wheeler
TI - Assessing the evaluability of complex public health interventions: Five questions for researchers, funders and policymakers
AID - 10.1136/jech.2011.143586.6
DP - 2011 Sep 01
TA - Journal of Epidemiology and Community Health
PG - A3--A3
VI - 65
IP - Suppl 2
4099 - http://jech.bmj.com/content/65/Suppl_2/A3.1.short
4100 - http://jech.bmj.com/content/65/Suppl_2/A3.1.full
SO - J Epidemiol Community Health 2011 Sep 01; 65
AB - Background Government policies and programmes to improve public health can often be regarded as complex interventions, in that they typically involve the flexible or tailored implementation of multiple interacting activities in a variety of settings to bring about population behaviour change and health improvement. However, evidence to support their development and implementation is often weak. Recognition of this ‘knowledge gap’ has led to repeated calls for more and better evaluation of the health impact of these complex ‘natural experiments’. Few may disagree in principle with the evaluative ‘call to arms’, but its implementation raises a number of scientific, practical and prioritisation issues, especially in a climate of public sector financial restraint. Objectives To develop an approach to appraising the evaluability of complex public health interventions, which stimulates and structures debate between researchers, funders and policymakers and helps them make decisions about evaluation within and between interventions as they evolve from initial concept to roll-out of full-scale intervention packages.
Methods Using the Healthy Community Challenge Fund (‘Healthy Towns’) in England as a case study of a complex intervention programme, and worked examples of two specific interventions within that programme, we have developed a set of five questions in the spirit of the Bradford Hill criteria: (1) Where is a particular intervention situated in the evolutionary flowchart of an overall intervention programme? (2) What difference will an evaluative study of this intervention make to policy decisions? (3) What are the plausible sizes and distribution of the hypothesised impacts of the intervention? (4) How will the findings of an evaluative study add value to the existing body of scientific evidence? (5) Is it practicable to evaluate the intervention in the time available? Results Using the specific worked examples of ‘family health hubs’ and ‘healthy urban planning’, we show how our approach can be used to identify the types of knowledge that might be generated from any possible evaluation given the strength of evidence available in response to each of the five questions, and to support more systematic consideration of resource allocation decisions depending on the types of knowledge required. Conclusions The principles of our approach are potentially generalisable and could be tested and refined in the context of other complex public health and wider social interventions.