Abstract
Background Serious concerns have emerged regarding publication bias, or the selective omission of outcomes data, whereby negative results are less likely to be published than positive results. This has important implications for evaluations of adverse events, because conclusions based only on published studies may not present a true picture of the number and range of these events.
Our objectives were to ascertain whether the underreporting of adverse events in the medical literature can be quantified and to measure the impact this underreporting has on systematic reviews of adverse events.
Methods A systematic review of studies assessing the impact of unpublished adverse events data on systematic reviews was undertaken. The PICO for this review was as follows: P (Population): any; I (Intervention): any; C (Comparisons): published versus unpublished data; O (Outcomes): numbers of studies, patients, or adverse events; types of adverse events; or odds ratios/risk ratios.
Studies were identified from 15 databases and through handsearching, reference checking, internet searches, and contact with experts. The search results were sifted independently by two reviewers. Because no quality assessment tool exists for these types of evaluation, the quality criteria were derived in-house.
Results From 4344 records, 27 methodological evaluations met the inclusion criteria. Ten compared the numbers of adverse events in matched published and unpublished documents. The percentage of adverse events that would have been missed had an analysis relied only on the published versions varied between 43% and 100% (median 57%). Two other studies demonstrated that substantially more types of adverse events are reported in unpublished than in published documents.
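To make the "missed adverse events" metric concrete, the following minimal Python sketch computes, for matched pairs of documents, the share of adverse events in the unpublished version that is absent from the published version. All counts are illustrative assumptions, not data from this review.

```python
# Illustrative sketch of the "missed adverse events" metric:
# the share of a study's adverse events (as reported in the unpublished
# version) that an analysis relying only on the published version would miss.
# The counts below are made up for illustration; they are not review data.

matched_pairs = [
    # (adverse events in published version, adverse events in unpublished version)
    (12, 28),
    (0, 15),
    (5, 5),
]

for published, unpublished in matched_pairs:
    missed = max(unpublished - published, 0)
    pct_missed = 100 * missed / unpublished if unpublished else 0.0
    print(f"published={published:3d}  unpublished={unpublished:3d}  missed={pct_missed:5.1f}%")
```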
Nine studies compared the proportion of trials reporting adverse events by publication status. The median percentage of published documents with adverse events information was 46%, compared with 95% in the corresponding unpublished documents. There was a similar pattern in unmatched studies, where 43% of published studies contained adverse events information compared with 83% of unpublished studies.
There were 15 meta-analyses that reported odds ratios or risk ratios both with and without unpublished data. Inclusion of the unpublished data increased the precision of the pooled estimate (a narrower 95% confidence interval) in 13 of the 15 pooled analyses.
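This precision gain is what inverse-variance pooling would predict: each added study contributes weight, which shrinks the pooled standard error. The sketch below assumes a fixed-effect inverse-variance model with hypothetical effect estimates (the review's meta-analyses may have used other models); it is an illustration, not the review's analysis.

```python
import math

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling.
    studies: list of (log odds ratio, standard error) tuples."""
    weights = [1 / se ** 2 for _, se in studies]
    pooled_log_or = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))  # more weight -> smaller SE -> narrower CI
    lo = math.exp(pooled_log_or - 1.96 * pooled_se)
    hi = math.exp(pooled_log_or + 1.96 * pooled_se)
    return math.exp(pooled_log_or), (lo, hi)

# Hypothetical trials as (log OR, SE) pairs -- not data from the review.
published = [(0.40, 0.30), (0.25, 0.35)]
unpublished = [(0.35, 0.25), (0.30, 0.40)]

for label, data in [("published only", published),
                    ("published + unpublished", published + unpublished)]:
    or_, (lo, hi) = pooled_or(data)
    print(f"{label}: OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Running this, the confidence interval for the combined data set is narrower than for the published trials alone, mirroring the pattern observed in 13 of the 15 pooled analyses.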
Conclusion There is strong evidence that much of the information on adverse events remains unpublished and that the number and range of adverse events are higher in unpublished than in published versions of the same study. Including unpublished data can improve the precision of pooled effect estimates in meta-analyses of adverse events.