Considering Complexity in Systematic Review of Interventions
Complex interventions and their implications for systematic reviews: a pragmatic approach

https://doi.org/10.1016/j.jclinepi.2013.06.004

Abstract

Complex interventions present unique challenges for systematic reviews. Current debates tend to center on describing complexity, rather than providing guidance on what to do about it. At a series of meetings during 2009–2012, we met to review the challenges and the practical steps reviewers could take to incorporate a complexity perspective into systematic reviews. Based on this, we outline a pragmatic approach to dealing with complexity, beginning, as for any review, with clearly defining the research question(s). We argue that reviews of complex interventions can themselves be simple or complex, depending on the question to be answered. In systematic reviews and evaluations of complex interventions, it will be helpful to start by identifying the sources of complexity, then mapping aspects of complexity in the intervention onto the appropriate sources of evidence (such as specific types of quantitative or qualitative study). Although we focus on systematic reviews, the general approach is also applicable to primary research that is aimed at evaluating complex interventions. Although the examples are drawn from health care, the approach may also be applied to other sectors (e.g., social policy or international development). We conclude that systematic reviews should follow the principle of Occam's razor: explanations should be as complex as they need to be and no more.

Introduction

What is new?

Key findings

  1. Where complexity is a focus of a review, the most important first steps are to clarify the review question and whether it is really about complexity; to identify the sources of complexity in the intervention; and to identify what type of study should be sought as evidence of those aspects of complexity.

What this adds to what was known?
  1. It is a challenge for systematic reviewers to produce reviews that incorporate a “complexity perspective,” where the review question and methods take account of complexity in the intervention. This article sets out an approach to thinking about sources of complexity in interventions, and how this can be mapped onto specific types of study.

What is the implication and what should change now?
  1. Much academic discussion is focused on describing aspects of complexity, rather than identifying its practical implications. Although it may be useful to identify sources of complexity in an intervention's implementation and effects, it does not always follow that it is essential to adopt correspondingly complex review methods.

There is considerable interest among practitioners, policymakers, and researchers in how evidence of the effects of complex interventions can be produced and synthesized. This interest is not new; the first workshop on systematic reviews of complex interventions was organized at the 1994 Cochrane Colloquium, with a report the next year [1], and the first detailed guidance on the design and evaluation of complex interventions to improve health was issued in 2000 [2], [3]. This interest stems partly from the need to further develop the evidence base on the effectiveness of health care and public health interventions, along with an awareness that synthesizing this evidence becomes more challenging as one moves along the spectrum from simpler toward more complex interventions. Another driver is debate about the most appropriate methods of evaluating health systems, and the recognition that it is important to know not just whether health system interventions work but also when, why, how, and in what circumstances such interventions work well [4], [5].

Evaluations of interventions in health care and other systems therefore tend to involve collecting a range of qualitative and other evidence to explain processes and help understand how the intervention interacts with its context. Not all these data may be scientific: Shepperd et al. [6] noted the role of nonacademic evidence such as policy documents. The challenge for systematic reviewers is therefore to produce reviews that incorporate a "complexity perspective," where the review question and methods take account of complexity in the intervention and then identify, analyze, and integrate heterogeneous evidence to help understand its processes and outcomes. The further challenge is to do this in a way that results in a review that is meaningful and useful to decision-makers.

However, although it is easy to describe aspects of complexity, it is less clear methodologically what one might do about it in a systematic review. The risk here is that complexity simply becomes a descriptor: we are keen to describe our interventions as complex because that attracts funding and publication, but we are less clear on the practical implications.

This article describes a pragmatic approach to dealing with complexity in systematic reviews that focuses on the research question and on research users' needs. It proposes that, where complexity is a focus of a review, the essential first steps are to clarify the review question and whether it is really about complexity; to identify the sources of complexity in the intervention; and to identify what type of study should be sought as evidence of those aspects of complexity. It also notes that although it may be useful to identify sources of complexity in an intervention's implementation and effects, it does not always follow that it is essential to adopt correspondingly complex review methods. In describing this approach, we focus on the degree of complexity of our models of reality, rather than that of reality itself, as the latter is beyond the scope of this article.

Section snippets

Aspects of complexity, and what to do about them

There are many sources of complexity in systematic reviews. Grimshaw et al. [1] noted complexity because of the characteristics of the intervention, contextual factors, multiple outcomes, and research factors, in which, for example, the data collection methods act as an effect modifier (or moderator). The research question itself may also be complex (e.g., it may not be confined to a single intervention but may relate to a package of interventions), and the evidence to answer that question may

Multiplicity of outcomes

As noted previously, multiple outcomes may be a feature of complex interventions, but they are not specific to them. Shiell et al. [9] make the important point that simpler interventions, such as vaccination, also have externalities but we often choose to ignore them. Whether to incorporate a single outcome or a multiplicity of outcomes in a systematic review of a complex intervention represents a choice (on the part of the reviewer, stakeholders, or funders) and may be a consequence of

Conclusions

We are not recommending that systematic reviewers confine themselves to a simple approach to complex interventions, but rather that they recognize that although most interventions involve elements of complexity, decisions regarding whether and how to address that complexity reflect choices on the part of the researcher. In systematic reviews, the reality we deal with is inherently complex, and only our models of it can be simple. In some cases, it may be most appropriate to set aside this

Acknowledgments

Provenance and contributorship: Most of the authors are systematic reviewers involved in developing review methods, as part of groups including the Cochrane/Campbell Health Equity Methods Group; the Cochrane Public Health Review Group; the Cochrane Effective Practice and Organisation of Care Group; and the Centers for Disease Control Community Guide, which among other things conducts systematic reviews as well as develops and refines systematic review methods. The group held a series of

References (27)

  • A. Mills et al. What do we mean by rigorous health-systems research? Lancet (2008)
  • K. Roberts et al. Factors affecting uptake of childhood immunisation: a Bayesian synthesis of qualitative and quantitative evidence. Lancet (2002)
  • J. Grimshaw et al. Complexity and systematic reviews. Report to the U.S. Congress of Technology Assessment (1995)
  • M. Campbell et al. Framework for design and evaluation of complex interventions to improve health. BMJ (2000)
  • P. Craig et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ (2008)
  • J. Webster et al. Evaluating delivery systems: complex evaluations and plausibility inference. Am J Trop Med Hyg (2010)
  • S. Shepperd et al. Can we systematically review studies that evaluate complex interventions? PLoS Med (2009)
  • L. Rychetnik et al. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health (2002)
  • P. Hawe et al. Theorising interventions as events in systems. Am J Community Psychol (2009)
  • A. Shiell et al. Complex interventions or complex systems? Implications for health economic evaluation. BMJ (2008)
  • P. Hawe et al. Complex interventions: how "out of control" can a randomised controlled trial be? BMJ (2004)
  • D. De Savigny et al. Systems thinking for health systems strengthening: Alliance for Health Policy and Systems Research (2009)
  • R. Emsley et al. Mediation and moderation of treatment effects in randomised controlled trials of complex interventions. Stat Methods Med Res (2010)

Financial disclosure: No relevant conflicts relating to this article. M.P. receives funding from the CIHR funded International Collaboration on Complex Interventions, which supported some of the work reported in this article.