
Editorials

New MRC guidance on evaluating complex interventions

BMJ 2008; 337 doi: https://doi.org/10.1136/bmj.a1937 (Published 22 October 2008) Cite this as: BMJ 2008;337:a1937
  Rob Anderson, senior lecturer in health economics
  PenTAG, Noy Scott House, Royal Devon and Exeter Hospital, Exeter EX2 5DW
  rob.anderson{at}pms.ac.uk

    Research Methods and Reporting, doi:10.1136/bmj.a1655

    Clarifying what interventions work by researching how and why they are effective

    It is eight years since the publication of the Medical Research Council’s original report on methods for developing and evaluating randomised controlled trials for complex interventions.1 Although presented as a “discussion document,” the MRC framework and its companion paper have often been cited as authoritative guidance on methods. Others, however, have found its definition of the complexity of interventions narrow and misconceived,2 and its suggested phases for developing and evaluating complex interventions unhelpfully similar to those of commercial drug evaluation. Even so, the report can probably be credited with stimulating much of the ongoing debate about appropriate methods and concepts in healthcare evaluation, particularly when the intervention of interest is hard to define, hard to evaluate (using conventional experimental methods), or just hard to explain.

    The MRC has now updated its original report (www.mrc.ac.uk/complexinterventionsguidance) to reflect recent developments in methods and lessons learnt in applying them. The guidance is summarised in the linked article by Craig and colleagues (doi:10.1136/bmj.a1655).3 It has a broader scope than the original version: it covers observational methods as well as randomised controlled trials, and implementation as well as the development and evaluation of interventions. It also adopts a broader definition of complex interventions, going beyond the core dimension of having multiple components.

    Nevertheless, some readers will think that certain recent developments in the methodology of evaluation are not reflected in the new guidance. Firstly, some believe that an approach based on the science of complex systems explains many behavioural, community, or population level health programmes better than conventional evaluation approaches do.4 5 This approach is advocated where the processes that the intervention attempts to change, or the interactions between people and resources within an intervention, can be likened to a complex system; feedback loops and other interactions mean that system level properties emerge (for example, community empowerment or health inequalities), and also that the system may abruptly “jump” from one state to another. Crucially, outcomes cannot be easily predicted from the particular combination of components in the intervention.6

    Secondly, and arguably more conspicuously, the new guidance does not explicitly acknowledge the potential of theory driven evaluation approaches. Interest in these methods, which essentially assess whether interventions work through an explicit and prospective focus on how and why they are thought to work, has increased considerably, especially since the publication of Pawson and Tilley’s Realistic Evaluation (1997)7 and Connell and Kubisch’s theories of change approach (1998).8 9 However, with a few exceptions,10 these approaches have been used more successfully for systematic reviews than for primary research.11 12

    To be fair, the new MRC guidance makes many encouraging references to the use of intervention theory, and not just for developing or optimising the intervention (the main use advocated in the original MRC framework). For example, a “good theoretical understanding of the intervention” is now also advised when choosing suitable outcome measures. The whole document reflects the general shift in health services research away from asking simply “what works?” towards asking how and why an intervention or public health programme works or fails in different circumstances. Correspondingly, the new guidance encourages the use of process evaluations alongside outcome evaluations, partly because they can “clarify causal mechanisms and identify contextual factors associated with variation in outcomes.” Several of the included case studies also illustrate the use of process evaluations, in some cases to develop an intervention’s theory.

    As a health economist, I find the recommendations on assessing cost effectiveness disappointingly brief. Crucially, they do not indicate how the different dimensions of an intervention’s complexity challenge existing methods of economic evaluation. Also, by repeating the conventional view that “the main purpose of an economic evaluation is estimation rather than hypothesis testing,” the guidance may unwittingly encourage the status quo. Most economic evaluations are still primarily quantitative evaluations of “black box” interventions, with little or no explicit interest in how and why they generate different effects or place different demands on resources, so evidence for explaining differential cost effectiveness is often speculative rather than empirical.

    This is perhaps unsurprising. With the exception of the recent article by Shiell and colleagues,5 few attempts have been made to bridge the gap between methods of economic evaluation and the broader methodological debates about the definition and evaluation of complex interventions. This is a shame, because economic evaluation is probably the one area of health services research in which methodological advances have been driven almost exclusively by the needs of evaluators of pharmaceuticals rather than those of directors of public health or service managers.

    It could be argued that complexity theory and theory driven approaches to evaluation were not covered because they rest on fundamentally different and unfamiliar paradigms of explanation, so trying to weave them into the MRC guidance might have served only to confuse rather than clarify matters. It is more likely, however, that their omission simply reflects a paucity of practical examples in which these approaches have clearly added value, at least in the sense of creating new knowledge that enables policy makers to design more effective interventions or to implement, tailor, or target them better in different populations or service contexts. It is therefore still up to researchers to demonstrate this, and up to research funders, such as the MRC with its new methodological remit, to give them the chance to do so.


    Footnotes

    • Competing interests: None declared.

    • Provenance and peer review: Commissioned; not externally peer reviewed.

    References