
Outcome reporting bias in evaluations of public health interventions: evidence of impact and the potential role of a study register
Mark Pearson, Jaime Peters
Peninsula Technology Assessment Group (PenTAG), Peninsula Medical School, University of Exeter, Exeter, UK
Correspondence to Mark Pearson, Peninsula Technology Assessment Group (PenTAG), Peninsula Medical School, University of Exeter, Veysey Building, Salmon Pool Lane, Exeter EX2 4SG, UK; mark.pearson{at}pms.ac.uk

Abstract

Background Systematic reviews of the effectiveness of interventions are increasingly used to inform recommendations for public health policy and practice, but outcome reporting bias is rarely assessed.

Methods Studies excluded at the full-text screening stage of a systematic review of a public health intervention were assessed for evidence of exclusion resulting from non-reporting of relevant outcomes. Studies included in the review were assessed, using a formal tool (Outcome Reporting Bias in Trials (ORBIT)), for evidence of outcome reporting bias and for the impact of this bias on the evidence synthesised.

Results None of the reports excluded at the full-text screening stage was excluded because of non-reporting of relevant outcomes. Of the 26 included papers, six showed evidence of missing or incompletely reported outcomes, with 63% of the unreported or incompletely reported outcomes identified as leading to a high risk of bias according to the ORBIT tool. Where there was evidence of the effectiveness of interventions before outcome reporting bias was assessed, identifying possible instances of outcome reporting bias generally reduced the strength of evidence for the effectiveness of the interventions.

Conclusion The findings from this single evaluation provide empirical data to support the call for a prospective public health interventions study registry to aid the identification of unreported or incompletely reported outcomes. Critical appraisal tools can also be used to identify incompletely reported outcomes, but a tool such as ORBIT requires development to be suitable for public health intervention evaluations.

  • selective reporting
  • outcome reporting bias
  • systematic reviews
  • public health
  • child accidents
  • public health policy
  • registers


Introduction

Systematic reviews of the effectiveness of interventions are increasingly used by decision-making bodies as a source of evidence to inform recommendations for public health policy and practice. While the rigour of studies included in public health systematic reviews is routinely assessed using a quality appraisal tool,1–3 with the exception of the Cochrane risk of bias tool,1 none assesses the potential bias to a subsequent review from missing or incompletely reported results. Outcome reporting bias occurs when outcomes are selectively reported on the basis of their statistical significance (eg, p<0.05), the magnitude of their estimated effect or their perceived interest to the study authors or intended readers.2 3 Selective outcome reporting is similar to publication bias, but rather than a whole study being missing from the literature, certain outcomes within a study are missing.

The lack of appraisal of outcome reporting bias in the public health literature may be due to the wide range of study designs that are appropriate for assessing complex public health interventions.4–6 The resulting diversity in reporting practice7 makes the assessment of outcome reporting bias considerably more difficult. However, there is substantial evidence that outcome reporting bias exists in randomised controlled trials (RCTs)8 9 and affects systematic reviews of RCTs.10 A potential solution is the prospective registration of RCTs, backed up by journal editorial policies that make publication conditional on prospective registration having taken place. A call has recently been made for a prospective public health interventions study register reflecting the range of study designs appropriate to the evaluation of complex public health interventions.11 As with the registration of RCTs, such a register would help overcome the problem of reporting bias in public health studies. The aims of this paper were to demonstrate the use of a formal tool (Outcome Reporting Bias in Trials (ORBIT)10) to appraise the studies included in a systematic review of public health interventions for the existence of outcome reporting bias and to assess the impact that this bias (where present) had on the evidence used to inform the development of guidance. The systematic review12 included 26 papers reporting 22 studies (10 RCTs, three cluster RCTs, four controlled before-and-after studies and five uncontrolled before-and-after studies).

Methods

We purposively selected a systematic review of a public health intervention in which one of us (MP) was the lead author12 in order to assess the included studies for evidence of outcome reporting bias and its impact on the evidence synthesised. First, we assessed all the papers excluded at the full-text screening stage (n=154) for evidence of exclusion resulting from non-reporting of relevant outcomes. A further 5272 abstracts were not assessed in this way because they were clearly not relevant to the review question. Second, we critically re-read each of the 26 papers included in the systematic review to identify missing or incompletely reported outcomes. For example, we attempted to reconcile the reporting in the methods and results sections to determine whether an outcome was likely to have been measured and analysed but not reported. Once we had identified papers with missing or incompletely reported outcomes, we (MP and JP) discussed three papers with respect to the ORBIT tool and the classification of missing or incompletely reported outcomes as carrying low, high or no risk of bias.10 The remaining 10 papers were classified by one author (JP) for instances of missing or incompletely reported outcomes using the ORBIT tool. Each classification was then discussed between the two authors (MP and JP) to ensure consistency or to reach a consensus. Difficulties in applying the ORBIT tool to the public health intervention studies meant that all 10 of these papers (in addition to the three initially discussed) required in-depth discussion before a consensus on the classification of outcome reporting bias could be reached.

We applied the ORBIT classification system10 to assess (1) the level of reporting of the outcome data (partial or none), (2) the likely risk of bias to the review from the exclusion of these data (no, low or high risk) and (3) the impact that inclusion of these data may have had on the findings of the review. The risk of bias is informed by the assumed reason for not reporting the outcome in full. If the non-reporting is considered to be related to the results of an analysis (eg, a statistically non-significant effect is observed), a high risk of bias is assumed from not including this piece of evidence in the review. The risk of bias is defined by Kirkham and colleagues10 as ‘arising from the lack of inclusion of non-significant results when a trial was excluded from a meta-analysis or not fully reported in a review because the data were unavailable’.
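
To make this judgement process concrete, the sketch below encodes a simplified version of the reasoning described above in Python. It is illustrative only: the class and function names, the reporting categories and the decision rule are our assumptions for exposition, not the published ORBIT tool.

    from dataclasses import dataclass

    @dataclass
    class OutcomeReport:
        outcome: str            # eg, 'scald injuries'
        reporting: str          # 'full', 'partial' or 'none'
        suspected_reason: str   # 'results-driven' or 'unrelated-to-results'

    def orbit_risk(report: OutcomeReport) -> str:
        """Assumed simplification of the ORBIT risk-of-bias judgement."""
        if report.reporting == 'full':
            return 'no risk'
        # Non-reporting judged to be related to the results (eg, a
        # statistically non-significant effect) implies a high risk of bias.
        if report.suspected_reason == 'results-driven':
            return 'high risk'
        return 'low risk'

    # An outcome reported only as 'not significant' would be judged high risk
    print(orbit_risk(OutcomeReport('scald injuries', 'partial', 'results-driven')))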

We tabulated our assessment by type of outcome (eg, injuries, installation of home safety equipment, knowledge and behaviour) alongside the results of included studies in which no instances of missing or incompletely reported outcomes were identified (see online-only material file 1). This enabled us to assess the impact of missing or incompletely reported outcomes on the review findings; in particular, we wanted to know whether the findings from the initial review would change in the light of the assessment of selective reporting within studies, as sketched below. This was achieved by making a judgement on the weight and strength of evidence. Statistical methods could not be used (as in Kirkham et al10) because these are based on the premise that a meta-analysis is undertaken, and a meta-analysis of the public health studies was not deemed appropriate for this review.
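
A minimal sketch of this tabulate-and-judge step follows, again as an illustration rather than the actual judgement process used in the review: the outcome labels, the three-level evidence grading and the one-level downgrading rule are all assumptions made for exposition.

    # Three-level grading assumed for illustration
    LEVELS = ['inconclusive', 'some evidence', 'strong evidence']

    def downgrade(strength: str, high_risk_instances: int) -> str:
        """Drop the evidence grade one level if any high-risk instance exists."""
        if high_risk_instances == 0:
            return strength
        return LEVELS[max(0, LEVELS.index(strength) - 1)]

    # Hypothetical tabulation: outcome -> (grade before assessment,
    # number of high-risk missing/incompletely reported outcome instances)
    table = {
        'home safety knowledge (scalds)': ('strong evidence', 1),
        'parental self-efficacy': ('some evidence', 1),
        'smoke alarm installation': ('strong evidence', 0),
    }

    for outcome, (before, n_high_risk) in table.items():
        print(f'{outcome}: {before} -> {downgrade(before, n_high_risk)}')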

Results

None of the 154 reports excluded at the full-text screening stage was excluded because of non-reporting of relevant outcomes. Of the 26 papers included in the original systematic review (providing 58 estimates of the effectiveness of different interventions), six were identified as having evidence of missing or incompletely reported outcomes, contributing a total of 19 instances of missing or incompletely reported outcomes (table 1).

Table 1

Classification of instances of no or incomplete reporting in systematic review of public health interventions to reduce unintentional injuries to children in the home

According to the ORBIT rating, 12 of the 19 instances (63%) of unreported or incompletely reported outcomes were identified as leading to a high risk of bias to the findings of the review. The most common classification was ORBIT rating E (seven instances, 37%): judgement indicates that the outcome was measured and analysed but not reported because of its results. In five of the 19 instances (26%), results were reported only as ‘not significant’ (typically stating p>0.05) (ORBIT rating A). Only three instances (16%) of missing or incompletely reported outcomes were assessed as posing a low risk to the review (all ORBIT rating F), and the remaining four instances (21%) were assessed as not leading to bias in the review (all ORBIT rating B).
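
As a check on these figures, the short calculation below tallies the instance counts implied by table 1 (seven rated E, five rated A, three rated F and four rated B; 19 in total, with ratings E and A judged high risk). The counts are our reading of the table, so treat the breakdown as an assumption.

    # Instance counts per ORBIT rating, as read from table 1 (assumption)
    counts = {'E': 7, 'A': 5, 'F': 3, 'B': 4}
    total = sum(counts.values())              # 19 instances overall
    high_risk = counts['E'] + counts['A']     # ratings judged high risk
    print(f'high risk: {high_risk}/{total} = {high_risk / total:.0%}')  # 63%
    for rating, n in counts.items():
        print(f'rating {rating}: {n}/{total} = {n / total:.0%}')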

The impact of these missing data on the findings of the review is shown in table 2. More detailed information on the individual study results and the assessment of outcome reporting bias can be found in online-only material file 1. Where evidence for an effect of interventions had been identified in the original review,12 identifying possible instances of selective reporting with a high risk of bias generally led to a reduction in the strength of evidence for the effectiveness of the interventions. Assessing for outcome reporting bias moderated the evidence of intervention effects in only two cases: one from strong evidence of effect to some evidence of effect (home safety knowledge about prevention of scalds) and one from some evidence of effect to inconclusive results (parental self-efficacy for home safety) (see table 2). Interventions for the six injury types that were inconclusive in the original review remained so after assessment using the ORBIT tool (see table 2). For home safety knowledge (prevention of poisoning) and smoke alarm installation, strong evidence for the effectiveness of interventions remained after the assessment of selective reporting. The evidence for the effectiveness of interventions for electric socket cover installation likewise remained unchanged after investigation of selective reporting.

Table 2

Strength of evidence before and after an assessment of possible reporting bias

Discussion

This study suggests that statistical significance is the most common driver of outcome reporting bias in a review of public health studies. Consequently, most instances of unreported or incompletely reported outcomes identified here were rated as leading to a high risk of bias in the review. The likely impact of the unreported or incompletely reported outcomes was mixed: in some cases it was negligible, while in others it reduced the strength of evidence for an effect of the intervention. The latter is consistent with the finding of Kirkham and colleagues10 that effectiveness estimates had been overestimated.

Many limitations of this study relate to the heterogeneity of the studies included in the review, in terms of both their design and their reporting, which hindered (1) the identification of missing or incompletely reported outcomes, (2) the use of ORBIT to classify instances of missing or incompletely reported outcomes according to their potential to bias the review and (3) the assessment of the impact of such instances on the conclusions of the review. A substantial degree of judgement was required to address these issues.

Unlike Kirkham and colleagues,10 we did not involve public health experts. However, we believe our general conclusion (that effectiveness is overestimated when reporting bias is not considered) would not change had experts been involved. Furthermore, we report the findings from an evaluation of within-study reporting bias in only one review. Nevertheless, the findings from this single evaluation provide empirical data to support the call for a prospective public health interventions study registry.11 A prospective study registry would aid the identification of unreported or incompletely reported outcomes16 and would also promote the cost-effective use of research funds by reducing unnecessary duplication of studies.11 The evidence required to inform public health policy that addresses the determinants of health inequalities is complex in nature and arises from diverse sources.17 This means that a study registry modelled entirely on the structure of clinical trial registries is unlikely to be adequate, but this should not discourage efforts to develop and implement a prospective study registry suitable for public health. Retrospective assessment of outcome reporting bias as part of the critical appraisal tools used in public health18 19 is also possible but will require the development of a tool such as ORBIT to make it more suitable for the study designs and reporting found in evaluations of public health interventions.

What is already known on this subject

Outcome reporting bias is an important issue in the RCT literature and can lead to overestimation of intervention effects.

What this study adds

  • Outcome reporting bias exists in the public health literature.

  • The exclusion of missing or incompletely reported outcomes can lead to a high risk of bias in a review and therefore the possible overestimation of intervention effectiveness.

  • A prospective public health interventions study registry would aid the identification of unreported or incompletely reported outcomes.

Footnotes

  • Linked article 140350.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.
