Pragmatic, formative process evaluations of complex interventions and why we need more of them
  1. Rhiannon Evans,
  2. Jonathan Scourfield,
  3. Simon Murphy
  1. DECIPHer, School of Social Sciences, Cardiff University, Cardiff, UK
  1. Correspondence to Dr Rhiannon Evans, DECIPHer, School of Social Sciences, Cardiff University, 1-3 Museum Place, Cardiff CF10 3BD, UK; EvansRE8@cf.ac.uk

Recently published guidance on process evaluations from the Medical Research Council's (MRC's) Population Health Sciences Research Network (PHSRN) marks a significant advance in the evaluation of complex public health interventions.1,2 In presenting programmes not just as a set of mechanisms of change across multiple socioecological domains, but as an interaction of theory, context and implementation, the guidance extends the remit of evaluation and forces us to reassess the responsiveness of existing methodologies and frameworks. Process evaluations have emerged as vital instruments in responding to these changing needs, through the modelling of causal mechanisms; the identification of salient contextual influences; and the monitoring of fidelity and adaptations, which helps avoid type III errors (concluding that an intervention is ineffective when it was in fact inadequately implemented).3

While the guidance offers an instructive set of standards, the authors' acknowledgement that there is no such thing as a 'typical' process evaluation1 ensures continued scope for debate and development around this framework. Specifically, the predominant focus on embedding process evaluations within definitive effectiveness trials encourages further theoretical and practical exploration of formative process evaluation. This approach corresponds to the preclinical phase and phase 1 of the MRC's guidance on the development and evaluation of complex interventions.4,5 The preclinical phase involves developing the intervention's theoretical rationale, primarily through consultation of the relevant literature. Phase 1 then focuses on modelling processes and outcomes in order to identify the underpinning active ingredients and to delineate how intervention components combine synergistically to generate outcomes.
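To make the kind of modelling involved in phase 1 more concrete, the following minimal sketch (our illustration, not part of the MRC guidance) represents an intervention's hypothesised theory of change as a simple data structure linking components to mechanisms and to intended or unintended outcomes. All component, mechanism and outcome names are hypothetical examples.

```python
# Minimal illustrative sketch: making an intervention's hypothesised
# theory of change explicit as a simple data structure. All names are
# hypothetical examples, not drawn from the MRC guidance.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CausalLink:
    component: str         # a putative active ingredient of the intervention
    mechanism: str         # the hypothesised mechanism of change it triggers
    outcome: str           # the outcome that mechanism is expected to produce
    intended: bool = True  # False flags a potential unintended consequence

@dataclass
class LogicModel:
    intervention: str
    links: List[CausalLink] = field(default_factory=list)

    def unintended(self) -> List[CausalLink]:
        """Return pathways flagged as potential unintended consequences."""
        return [link for link in self.links if not link.intended]

model = LogicModel(
    intervention="School-based social and emotional learning programme",
    links=[
        CausalLink("Small-group sessions", "Improved emotional regulation",
                   "Reduced challenging behaviour"),
        CausalLink("Targeted selection of pupils", "Labelling by peers",
                   "Stigmatisation", intended=False),
    ],
)

for link in model.unintended():
    print(f"Potential iatrogenic pathway: {link.component} -> "
          f"{link.mechanism} -> {link.outcome}")
```

Writing the model down in some such explicit form makes the implicit causal assumptions available for interrogation, which is precisely what a formative process evaluation sets out to do.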

One particular conceptual space that needs to be carved out is that of pragmatic formative process evaluation. This may be defined as the application of formative process evaluation criteria to interventions that have ostensibly already been formulated, and are likely already in routine practice, but have not been subjected to rigorous scientific development and evaluation. Such interventions are often distinguishable by their lack of a robust evidence base. The term pragmatic formative process evaluation is used here deliberately, to achieve consistency in terminology with pragmatic policy trials and, to a lesser extent, natural experiments. These approaches also take advantage of expedient evaluation opportunities within real-world settings, and are often integrated into the process of disseminating interventions or programmes of legislation, with randomisation being nested within roll-out.
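To illustrate what randomisation nested within roll-out can look like, here is a minimal sketch of a stepped-wedge-style allocation, in which sites are randomly assigned to successive roll-out waves. The site names, number of waves and seed are all hypothetical; the article itself does not prescribe any particular design.

```python
# Minimal illustrative sketch: randomisation nested within a phased
# roll-out (a stepped-wedge-style allocation). Site names, wave count
# and seed are hypothetical.
import random

sites = ["School A", "School B", "School C",
         "School D", "School E", "School F"]
n_waves = 3  # the intervention is rolled out in three successive waves

rng = random.Random(42)  # fixed seed so the allocation is reproducible
rng.shuffle(sites)       # random order determines wave membership

# Sites still awaiting roll-out serve as comparators for sites that
# have already received the intervention.
waves = {wave + 1: sites[wave::n_waves] for wave in range(n_waves)}
for wave, allocated in sorted(waves.items()):
    print(f"Wave {wave}: {', '.join(allocated)}")
```

Because every site eventually receives the intervention, a design of this kind sits comfortably alongside routine dissemination while still preserving a randomised comparison.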

Focus on the development of pragmatic formative process evaluations is largely justified by the abundance of widely practised but non-evidence-based complex approaches in public health. Explanations for this state of affairs include: the presumed irrelevance of social equipoise; dissonant policy and research timescales; and the perception that an intervention will not confer harm.6,7 However, such interventions are often left out of discussions around formative evaluation. Indeed, the MRC guidance4 claims that if an intervention is already widely delivered, a modelling and testing phase may often not be essential. Yet for many widely practised interventions, the rigorous process of theoretical development and interrogation of implicit causal assumptions may still need to be conducted.

Moreover, even where some understanding of the theory of change is present, it is unlikely that the unintended consequences of interventions will have been sufficiently theorised and empirically explored. For example, our recent pragmatic formative process evaluation of a school-based social and emotional learning intervention, which had been recommended by the Welsh school inspectorate as best practice in managing challenging behaviour,8 indicated a number of potential iatrogenic effects arising from a stigmatising and negative targeting process.9

The research frameworks and methodologies employed in pragmatic formative process evaluations will largely mirror those used in formative evaluations of novel interventions during the preclinical phase and phase 1 of the MRC guidance. These should include (systematic) review to identify the existing evidence base in order to theorise and verify the intervention's active ingredients. Effective consultation with programme developers is also of paramount importance, in order to elicit their knowledge, assumptions and understandings of intervention theory.

Inclusion of relevant stakeholders and target populations, largely through qualitative research, is also required to understand contextual influences, unravel implementation procedures, and anticipate feasibility and acceptability.5 This understanding can help to mitigate implementation practices that compromise theoretical integrity, while allowing interventions to be responsive to specific contextual needs. A range of frameworks can be drawn on from implementation science to theorise the delivery of interventions within real-world settings, with Rogers' diffusion of innovations theory gaining increasing currency.10

There is now a wealth of literature to support the process of knowledge exchange between research and practice, which can be exploited to enhance the conduct of pragmatic formative process evaluations.11 However, there remains a propensity to treat this exchange as unidirectional, with knowledge being disseminated from research to practice. Pragmatic formative evaluation demands a more cyclical understanding of knowledge transfer, with a particular focus on enhancing the methods and modalities that can support the translation of practice-based knowledge and ideas into research. Emerging examples of frameworks that may be adapted to enhance such translational research include Spoth et al's TSci Impact Framework.12 In finding ways of working with frameworks and methods that foreground the voices of programme developers as well as practitioners, and that emphasise the importance of knowledge coproduction, pragmatic formative evaluations have the capacity to contribute towards interventions that have high external validity and more sustainable implementation practices.

Continued development of process evaluation is necessary if public health researchers are to understand and capture messy realities, and to respond to them through effective intervention. To this end, it is important not to condemn policymakers and practitioners for working outside of normative evaluation models, but to focus on flexible evaluation designs and methodologies that can accommodate real-world complexities. Pragmatic formative process evaluations are vital to this programme of work, as they attend to an additional source of complexity: the expanse of interventions that require evaluation but are already being rolled out or are in routine practice. Pragmatic policy trials and natural experiments also have much to offer here. However, in moving towards more flexible research designs, it is essential to maintain scientific rigour. One of the central reasons for introducing the concept of pragmatic formative process evaluation is not only to highlight its distinct and vital contribution to the development and evaluation of interventions, but to ensure that it is acknowledged and valued. Otherwise, there is a risk that this evaluative phase does not lie within anyone's remit of responsibility and accountability, but instead falls into the gap between research and practice, thus preventing potentially promising interventions from realising their full effects.

Acknowledgments

The work was undertaken with the support of The Centre for the Development and Evaluation of Complex Interventions for Public Health Improvement (DECIPHer), a UKCRC Public Health Research: Centre of Excellence. Funding from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council (RES-590-28-0005), Medical Research Council, the Welsh Government and the Wellcome Trust (WT087640MA), under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged.

References

Footnotes

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.