Aims To determine whether economic evaluations are subject to publication bias by considering whether economic data are as likely to be reported, and reported as promptly, as effectiveness data.
Methods Trials that intended to conduct an economic analysis and ended before 2008 were identified in the International Standard Randomised Controlled Trial Number (ISRCTN) register; a random sample of 100 trials was retrieved. Fifty comparator trials were randomly drawn from those not identified as intending to conduct an economic study. The trial start and end dates, estimated sample size and funder type were extracted. For trials planning economic evaluations, effectiveness and economic publications were sought, and publication dates and journal impact factors were extracted. Effectiveness abstracts were assessed for whether they reached a firm conclusion that one intervention was most effective. Primary investigators were contacted about reasons for non-publication of results, or for differential publication strategies for effectiveness and economic results.
Results Trials planning an economic study were more likely to be government funded (P=0.01) and larger (P=0.01) than other trials. Trials planning an economic evaluation had a mean of 6.5 years (range 2.7–13.2 years) since the trial end in which to publish their results. Effectiveness results were reported by 70% of these trials, while only 43% published economic evaluations (P<0.001). Reasons for non-publication of economic results included the intervention being ineffective, and staffing issues. Funding source, time since trial end and length of study were not associated with a higher probability of publishing the economic evaluation; however, studies that were small or of unknown size were significantly less likely to publish economic evaluations than large studies (P=0.001). The authors' confidence in labelling one intervention clearly most effective did not affect the probability of publication. Where both effectiveness and cost-effectiveness data were reported (28 simultaneously), the mean delays from the trial end to publication were 2.5 and 3.0 years, respectively (P=0.001). The median journal impact factor was 1.6 points higher for effectiveness publications than for economic publications (P=0.02). Reasons for publishing in different journals included editorial decision-making and the additional time that an economic evaluation takes to conduct.
Conclusions Trials that intend to conduct an economic analysis are less likely to report economic data than effectiveness data; most economic evaluations remained unpublished after a mean of 6.5 years. Where economic results are published, they appear a mean of 0.5 years after the effectiveness data. These results suggest that economic output may be more susceptible than effectiveness data to publication bias.