A lack of time, funds or other resources is the explanation that clinical researchers have given for failure to publish all the results of large randomised trials.1 It has been estimated that 40–62% of trials have introduced new variables into the study and/or omitted others.2 This insight has been gained by comparing trial protocols with publications. The International Clinical Trials Registry Platform was established in response to such observations. One goal was to prevent outcome reporting bias, that is, where only a selection of a trial's outcomes are reported, based on the result, leading to a biased view of an intervention's effect.3
Inspired by this, in 2007 the Cochrane Health Promotion and Public Health Field led the call for a register for public health interventions as well, adapted to the diversity of methods used to assess interventions in public health.4 The paper by Pearson and Peters in this issue of the journal echoes this call, and supports it by assessing the extent of possible under-reporting of outcomes in the papers within one of their own systematic reviews of interventions to reduce unintentional injuries to children at home.5 Readers may take issue with the strength of their investigation, based as it is on assumptions about the missing data. However, in terms of their overall recommendation, there is no strong reason to believe that researchers in public health would be any less prone to publication bias in reporting their outcomes than their clinical counterparts. The field of public health intervention research would indeed benefit from such a register, given the observation that process evaluation is also under-reported.6
Under-reporting plays havoc with the interpretation of the effectiveness of population-level interventions to promote health and reduce health inequity. In so doing, we let down the society we profess to serve.
So, one has to ask, how did it get this way and what more can be done?
One overlooked underlying contributor to the under-reporting problem, other than the usual things we like to blame (eg, word limits in journals), may be the exaggerated need to present a neat coherent story, and the consequent temptation to narrow the frame of thinking and leave out the parts that do not fit.
One of the Cochrane group's recommendations is that we register the logic or theory of the change process under investigation, prior to embarking on the investigation. Basically, this means stating: ‘we're going to capture this phenomenon with these variables for these reasons’. We should be held accountable for that theory.
But it would not be research if those views were not challenged and revised by field experience as well. This means that part and parcel of getting to the end of a paper should be not only to account for what context, process and outcome variables fitted the way it was planned, but also to describe what fitted in a different way than originally expected (which one can only make sense of retrospectively). We should also state what remained untouched and not explained. It is not shameful to have material remaining in the last category. But presently researchers may well be feeling a sense of shame, because that is what Pearson and Peters feel is being omitted.
So here is the recommendation. In public health interventions we need to call an amnesty on all unreported results of intervention studies, following the calls made by our clinical counterparts,7 but not confining our interest to trial designs. We also need to invite submission of untold and even contradictory stories from process evaluation of interventions, and interactions in context, recognising that until recently many researchers felt that variation in interventions across sites was something to be frowned on, rather than valued and understood.8 This means putting aside arbitrary ideas that only results from the last 5 years are of interest to readers. If authors can make a case that their results are still relevant, then let us see them. Finally, we need to acknowledge and legitimise authors' accounts of what they (currently) cannot explain, provided that they can justify why the variables were included in the first place.
Holistic synthesis and sense-making across the entire phenomenon observed in an intervention study asks us to be scholars, rather than just researchers, and to be theorists always—a call others make emphatically in epidemiology.9 10 If authors cannot explain the puzzle fully themselves, their experience provides the raw material for others to build on. Authors are too accustomed to ducking this challenge. It is time to make a change.
Provenance and peer review: Commissioned; not externally peer reviewed.