Abstract
Introduction
There has been growing attention to using effectiveness evidence to guide public health and health improvement policies, strategies, programmes, and actions "on the ground." However, there has only been partial recognition of complications that have a material bearing on how such evidence is interpreted and translated into action. This paper aims to shed further light on such complications, capitalising on the authors' previous roles as "evidence originators" in academia and their more recently gained perspectives as "evidence interpreters" in national agencies.
Methods
A particular randomised controlled trial in which the authors were directly involved (based on a workplace cardiovascular disease prevention programme) was revisited and used as an illustrative case study to elucidate important considerations in assessing and applying effectiveness evidence more generally.
Results
Relatively obvious, and less obvious, complicating factors were identified in relation to defining the intervention, judging effectiveness, and assessing the transferability of findings. In addition, some "bigger picture" considerations were described, with individual interventions viewed as pieces in the health improvement "jigsaw" or as frames in the "movie" of ever-changing influences on population health.
Conclusion
The layers of complexity uncovered in this work should be taken into account in designing, executing, and reporting primary evaluative studies and reviews, in formulating recommendations for action, and in developing more fully fit-for-purpose approaches to evidence-informed public health and health improvement.