
Creating a demand for bias analysis in epidemiological research
  1. Matthew P Fox
  1. Dr M P Fox, Department of International Health, School of Public Health, and Center for International Health and Development, Boston University, Boston, USA; mfox@bu.edu


In 2005, Ross and colleagues found a protective association between maternal multivitamin supplementation during the periconceptional period and acute lymphoblastic leukaemia among children with Down’s syndrome (OR 0.51; 95% CI 0.30 to 0.89).1 In their discussion they noted, “Maternal vitamin supplementation was collected by self-report, which is subject to…recall bias. However, a validation study…reported excellent agreement...” The citation of the validation study appears to imply that the bias from misclassification should be minimal. But given the number of false-positive published research findings,2 which are partly explained by problems of misclassification, this discussion of the potential misclassification raises more questions than it answers.

First, we want to know the expected magnitude and direction of the bias. Second, given the potential bias, how should we interpret the 95% CI, which is calculated assuming no bias?3 And how should we as readers integrate ideas about the magnitude and direction of the bias into our interpretation of the results? For example, if we believe the misclassification was more substantial than the validation study suggests, do we shift the point estimate in our heads, and if so, how far? Or do we instead widen the confidence interval to include a broader range of results? These mental adjustments are difficult to make even for those with intimate knowledge of the subject matter, yet the discussion of the bias gives us little guidance.

What would be more helpful than qualitative speculation is a picture of the total error in the study, an interval whose width and central tendency account for both the systematic and the random error. As students of epidemiology, we learn it is essential to quantify the impact of random error in our studies, but when it comes to bias we are often taught to avoid it or describe it. The result is that as consumers of research, we are generally left to speculate on or ignore the impact of bias.

In this edition of the journal, Jurek and colleagues4 use bias analysis to elevate the misclassification discussion in the Ross study from qualitative judgement to quantitative analysis (see page 168). Bias analysis methods5–10 use mathematical formulas to relate the observed study data to the hypothetical true data that would have been seen had there been no bias.5 We can then use validation data or educated guesses about the magnitude of the bias to “correct” for it. Jurek and colleagues made assumptions about the rates of misclassification of multivitamin use to adjust the observed data. They made these corrections probabilistically, by randomly sampling thousands of times from distributions describing the misclassification rates. With each set of parameters chosen, they “corrected” the data, giving them a distribution of corrected estimates that they summarised into a 95% simulation interval. They demonstrate that, assuming moderate false-positive and false-negative rates of multivitamin use (scenario 2: 0–20%, correlation 0.8), the median corrected estimate was 0.42, suggesting the expected truth was further from the null than observed. However, the 95% limits of the distribution of corrected estimates (0.32–0.52) include estimates closer to the null.
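To make the mechanics concrete, the sketch below shows the general structure of such a probabilistic bias analysis for exposure misclassification in a case-control study. It is a minimal illustration only: the 2×2 counts, the Uniform(0, 0.20) bias-parameter distributions and the Gaussian-copula correlation of 0.8 are assumptions chosen for this sketch, not the published data or the authors’ actual inputs.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical observed counts (exposed/unexposed by cases/controls).
a_obs, b_obs = 60, 140   # cases:    exposed, unexposed
c_obs, d_obs = 90, 110   # controls: exposed, unexposed
n_cases, n_controls = a_obs + b_obs, c_obs + d_obs

n_sim = 50_000
rho = 0.8                            # assumed case/control parameter correlation
cov = [[1.0, rho], [rho, 1.0]]

corrected_ors = []
for _ in range(n_sim):
    # Draw correlated false-positive and false-negative probabilities for
    # cases and controls via a Gaussian copula, each marginally Uniform(0, 0.20).
    fp_cases, fp_controls = 0.20 * norm.cdf(rng.multivariate_normal([0, 0], cov))
    fn_cases, fn_controls = 0.20 * norm.cdf(rng.multivariate_normal([0, 0], cov))

    # Back-calculate the "true" exposed counts implied by these rates:
    # observed exposed = (1 - FN) * true_exposed + FP * (N - true_exposed),
    # so true_exposed = (observed - FP * N) / (1 - FN - FP).
    a_true = (a_obs - fp_cases * n_cases) / (1.0 - fn_cases - fp_cases)
    c_true = (c_obs - fp_controls * n_controls) / (1.0 - fn_controls - fp_controls)
    b_true, d_true = n_cases - a_true, n_controls - c_true

    # Discard draws that imply impossible (non-positive) cell counts.
    if min(a_true, b_true, c_true, d_true) <= 0:
        continue
    corrected_ors.append((a_true * d_true) / (b_true * c_true))

corrected_ors = np.array(corrected_ors)
median, lo, hi = np.percentile(corrected_ors, [50, 2.5, 97.5])
print(f"Median corrected OR: {median:.2f}; 95% simulation interval: {lo:.2f} to {hi:.2f}")

Note that, as with the interval reported by Jurek and colleagues, the simulation interval produced by this sketch reflects only the systematic error specified by the bias parameters; random error would still need to be layered on top to depict the total study error.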

Assuming we agree with their bias parameter distributions, this analysis should do three things: (1) increase our confidence in a protective association of multivitamins, as most of the corrected estimates are more extreme than the observed point estimate; (2) increase our belief in a protective association stronger than the one observed, as the median of the corrected estimates is further from the null than the observed estimate; and (3) reduce our confidence in the precision of the original results, as the lower limit of the simulation interval accounting for systematic error (0.32) is nearly the same as the traditional lower 95% confidence limit (0.30), even before accounting for random error.

While the simulation interval does not depict the total study error (it still ignores random error), by using bias analysis the authors have elevated the discussion of the bias from the qualitative to the quantitative. Although bias analysis may be criticised as subjective, it seems no more subjective than calculating a frequentist confidence interval as if there were no bias. If I disagree with the authors’ chosen parameter distributions, I can conduct my own bias analysis. But while qualitative judgements about bias are hard to refute or prove, quantitative assessments of bias can be interpreted, debated and refined.

There is little doubt that bias analysis is a necessary component of epidemiological analysis, but there is currently little incentive for authors to include bias analyses in their work. So how can we ensure that bias analysis gets the same attention as assessments of random error? First, we must continue improving these methods and teach them to students in familiar software packages. Second, as reviewers and editors of journal articles, we must demand bias analyses, particularly in cases where a study reports a precise finding with a likely bias. Only when bias analysis becomes routine will we be able to effectively portray and interpret the total error in our work.

REFERENCES

Footnotes

  • Competing interests: None declared.
