Background: Fewer than half of studies presented at conferences are published in full within two years, and unpublished studies differ systematically from those that are published. In particular, unpublished studies are less likely to report statistically significant findings, and this introduces publication bias. This has been well documented for quantitative studies, but has never been explored in relation to qualitative research.
Methods: We reviewed the abstracts of qualitative research presented at the 1998 (n = 110) and 1999 (n = 114) British Sociological Association (BSA) Medical Sociology meetings, and attempted to locate those studies in databases or by contacting authors. We also appraised the quality of reporting in each abstract.
Results: We found an overall publication rate for these qualitative studies of 44.2%. This is nearly identical to the publication rate for quantitative research. The quality of reporting of study methods and findings in the abstract was positively related to the likelihood of publication.
Conclusion: Qualitative research is as likely to remain unpublished as quantitative research. Moreover, non-publication appears to be related to the quality of reporting of methodological information in the original abstract, perhaps because this is a proxy for a study with clear objectives and clear findings. This suggests a mechanism by which “qualitative publication bias” might work: qualitative studies that do not show clear, or striking, or easily described findings may simply disappear from view. One implication of this is that, as with quantitative research, systematic reviews of qualitative studies may be biased if they rely only on published papers.
Fewer than half of studies presented at conferences are published in full within two years, and unpublished studies differ systematically from those that are published.1 In particular, unpublished studies are less likely to report statistically significant findings, which compromises the ability of systematic reviews to assemble complete, unbiased summaries of a body of evidence. This “publication bias” is often assumed to be due to journal editors and referees rejecting “negative” studies, though it may also be the result of authors not submitting “negative” studies.2 The end result is the same, however: systematic reviews may be at risk of producing research syntheses that are biased towards “positive” findings if they exclude “grey” literature (that is, literature that is not published in conventional, academic journals). Research on publication bias to date has focused on quantitative research (mainly studies reporting on evaluations of interventions), and it is not known whether qualitative research is subject to the same biases. Given increasing efforts to include qualitative research in systematic reviews, it is important to know whether qualitative research is also subject to publication bias. We therefore examined a sample of abstracts reporting qualitative research, to assess the rate of publication and to explore the existence of publication bias. We hypothesised that the rate of non-publication would be similar to that for quantitative studies.3 4
Abstracts reporting primarily qualitative work presented orally at the September 1998 (n = 110) and 1999 (n = 114) British Sociological Association (BSA) Medical Sociology meetings were reviewed. We included any primary study that was reported using qualitative methods. We excluded quantitative research, review papers, general discussion papers, methodological papers and theoretical papers that did not report empirical research. An information scientist (VH) then searched online databases (Medline and Web of Science) to locate papers in any language that had subsequently been published in academic journals. Whether a paper had subsequently been published was ascertained by VH and MP, who examined the electronic version of the title and abstract where available. Where no record was found in any database, RK contacted the authors for more information. Initial searches were carried out by VH during 2005, with RK contacting authors for whom no record could be found until April 2006. This means that up to 54 months (for the 1998 papers) and 43 months (for the 1999 papers) elapsed between presentation and publication.
Independently of the searches (and before we knew the publication status), we coded the quality of reporting for all of the 1999 abstracts. Each was read independently by two reviewers (MP and HR), blinded to the author and institution, and coded according to:
Whether the research question was stated
Whether there was a statement of the relevance or worth of the study
Whether there was a description of the context/setting for the study
Whether there was a description of the sample, or sampling procedure, and
Whether there was a description of the study findings.
Where there was disagreement, we revisited the abstract, discussed the reasons for the difference (for example, overlooking information in the abstract, or more commonly, lack of clarity in the abstract), and resolved these. We then examined the association between publication status and reporting quality using the χ2 test.
In addition, a convenience sample of 10 people involved with MedSoc conference organisation was contacted for their recollections of abstract selection. They were asked whether they could recall the proportion of submitted abstracts accepted, whether the standard was set lower for postgraduates, and whether the “rules” for abstracts (for example, structured or not) had changed over time.
The search of online databases (completed in February 2005; 64 months after presentation for the 1999 abstracts; 76 months for the 1998 abstracts) for the 224 abstracts uncovered 67 publications (30%) that appeared to match the abstract. Contact with authors uncovered a further 32 publications, giving an overall publication rate of 44.2%. This is nearly identical to the publication rate for quantitative research, which has been estimated at 44.5%.1 Responses received from those approached for their recall of abstract selection are summarised in the section on limitations. With respect to the five questions assessing quality of reporting, there was a clear statement of the research question in just over two-thirds of abstracts (n = 77; 67.5%) and a statement of the relevance or worth of the study in 43.9% (n = 50) of abstracts. A description of the context/setting for the study was included in just over half the abstracts (51.8%; n = 59), and a description of the sample, or sampling procedure in 57.9% (n = 66). A description of the study findings appeared in about half (55.3%; n = 63).
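The overall publication rate reported above can be reproduced with simple arithmetic. A minimal sketch (the variable names are ours, not the study's):

```python
# Publication-rate arithmetic from the figures reported in the Results.
db_hits = 67      # publications found via Medline / Web of Science
author_hits = 32  # further publications identified by contacting authors
total = 224       # abstracts reviewed (110 from 1998 + 114 from 1999)

rate = (db_hits + author_hits) / total
print(f"Overall publication rate: {rate:.1%}")  # 44.2%
```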
There was an association between the number of “quality items” answered positively and whether the paper was subsequently published: the rate of publication for those with a positive answer to one or fewer questions was less than 30%; for those with four or five items, it was at least 50% (χ2 = 10.5; one-sided p value = 0.03). Across all abstracts, raters agreed on 71% of the “quality items”; for the remaining 29%, where there was disagreement, the relevant abstract was jointly examined and a decision made.
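The χ2 association described above can be illustrated as follows. The paper does not report the full contingency table, so the counts below are purely hypothetical; only the procedure (Pearson's χ2 on a published/unpublished by quality-score table) reflects the analysis described:

```python
# Hypothetical published/unpublished x quality-score contingency table.
# NOTE: these counts are illustrative, not the study's data; the paper
# reports only the resulting statistic (chi2 = 10.5, one-sided p = 0.03).
observed = [
    [6, 25, 32],   # published:   0-1, 2-3, 4-5 quality items
    [16, 25, 10],  # unpublished: 0-1, 2-3, 4-5 quality items
]

def chi_square(table):
    """Return the Pearson chi-squared statistic and degrees of freedom."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

stat, df = chi_square(observed)
print(f"chi2 = {stat:.1f}, df = {df}")
```

The p-value would then be read from a χ2 distribution with the computed degrees of freedom (for example, via `scipy.stats.chi2.sf`).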
A few respondents gave reasons for non-publication. These included (predictably) lack of time, as well as job moves (in some cases specifically linked to contract research). Some reported that the paper had been presented in order to discuss it with peers rather than as a prelude to publication; others reported that they still intended to publish; and one reported having “lost heart” after poor reviews.
History and context from conference organisers provide some background on whether one might reasonably expect abstracts to be a prelude to publication. While all organisers replied, not all of them were able to answer our queries. None of our respondents was able to provide a figure, or even an impression, of the ratio of abstracts submitted to abstracts accepted, but some recalled that until relatively recently all abstracts had been accepted, and that one of the aims of the conference was to provide a mechanism for new researchers, in particular postgraduates, to present their studies:
“There was a real sense of wanting to support people to share their work… [this conference] encouraged the mixing of new researchers with those more established… it brought together researchers often working in isolation (eg, medical sociologists working alone in schools of medicine).”
Structured abstracts had been introduced at the point when the volume of abstracts had exceeded the time available for presentations. One senior social scientist reported: “The present competition is a phenomenon of the last 5–8 years… abstracts have become a bit more formal in recent years.”
One organiser wrote: “I’m not clear how many times they’ve actually had to use the review/selection process (initially there was a review team set up just in case it was needed).” Another added: “as time went on, the scientific panel became more structured in their approach to reviewing abstracts.”
This study found that over half of primary qualitative research studies remain unpublished, a rate almost identical to the rate of non-publication for quantitative studies. This is the first time this has been demonstrated for qualitative research. Moreover, non-publication appears to be related to the quality of reporting of methodological information in the original abstract, perhaps because this is a proxy for a study with clear objectives and clear findings. Again, this has not been previously demonstrated. That qualitative studies without clear findings are less likely to be published suggests a mechanism by which “qualitative publication bias” might work: qualitative studies that do not show clear, or striking, or easily described findings may simply disappear from view. This may occur either because they are not submitted to journals by authors, or because they are rejected by editors/reviewers who are unclear about their relevance.
If this research remains unpublished because of poor conceptualisation or poor quality, this does not necessarily indicate bias. Longer lag times to publication in the social sciences have been found previously, and may be associated with higher initial rejection rates, so that manuscripts may be more likely to be rejected by several journals before being published.5 If, however, studies with nuanced or apparently complex findings are less attractive editorially than studies where the authors present or claim clear-cut results, then that is more worrying. In quantitative studies, it tends to be the direction of effect that causes most concern in relation to publication. This is not, however, the only source of bias.
One of this study’s limitations is its size: while similar in scope to comparable studies of quantitative research, the current study used data from just one UK conference, albeit one with a large sample of health-related qualitative abstracts. Moreover, conference organisers reported that these meetings were used specifically to encourage new researchers to share work at a relatively early stage, which may militate against subsequent publication in its existing format; the research project may change significantly or may be dropped altogether. Presenters whose work had not been published reported using the conference in this way.
One implication of our findings is that, as with quantitative research, systematic reviews of qualitative studies may be biased if they rely only on published papers.3 Such reviews, which are becoming more common,6–10 should therefore consider seeking and including unpublished qualitative studies, as well as considering the impact that publication bias may have on their review’s findings. Approaches to dealing with publication and related biases in quantitative studies include preventing them (for example, by registering trials before enrolment begins),11 12 making strenuous efforts to find all published and unpublished work, and detecting them and limiting their effects (for example, by excluding poorer quality studies),13 though the frequency of assessment of publication bias remains low even in quantitative research.14 The applicability of these approaches to qualitative research is a matter for debate. More importantly, whether the consequences of such bias are as potentially serious in qualitative as in quantitative research is itself a researchable question.
What is already known on this subject
It is known that much quantitative research presented at conferences is not subsequently published, and that the published research differs systematically from that which remains unpublished; in particular, the findings of unpublished studies are more likely to be statistically non-significant. This is known to introduce bias into systematic reviews.
What this study adds
Systematic reviews of qualitative research are becoming more common, but publication bias has not hitherto been shown.
This paper shows that the non-publication of qualitative research is as common as non-publication of quantitative studies, and suggests that published studies differ systematically from unpublished studies.
Publication bias is therefore an issue that systematic reviewers of qualitative research need to consider.
Users of systematic reviews of qualitative research may need to consider whether the reviewers have included an unbiased sample of all qualitative studies, or have carried out a comprehensive search for relevant published and unpublished literature.
We thank the conference organisers and the authors who took time to supply us with additional information about the conference and about the abstracts. We also thank Roberta W Scherer, Gemma Binefa i Rodríguez and the anonymous referee for advice and suggestions.
Competing interests: None.
Funding: This study was unfunded. MP and HT are funded by the Chief Scientist Office of the Scottish Executive Department of Health.