How could disclosure of interests work better in medicine, epidemiology and public health?
Biasing of studies towards results desired by the investigators—investigator bias—arises from many sources, from outright data fabrication to subtle and even unconscious bias in design and analysis choices. Investigator bias has had an important impact in some areas of clinical practice, and can be a major source of uncertainty about study effects. Among the sources of investigator bias, empirical studies have suggested that estimates of effect are often associated with funding source. Disclosure of financial ties thus supplies predictive information about study results. Although this predictability does not by itself say which results are more or less biased, it does stand as a potentially important source of study variation, and hence is needed for full uncertainty assessments. The key problem is then fair use of disclosure data by the evaluator. Fair use will require a clear understanding that predictive power at the group level should never be used for indictment, let alone claims of impropriety (as has occurred). Evaluators need to be on the alert for their own biases, and if they wish to use these biases they should give them the explicit form of a subjective prior distribution.
Many have argued that full-scale uncertainty analysis is needed before public-health or medical recommendations are made.1–4 These arguments have largely concerned formal analysis of uncertainty arising from the “holy trinity” of validity threats: uncontrolled confounding, selection bias and measurement error; model misspecification is sometimes added to the list. Occasionally, attention is drawn to more subtle (but potentially large) biases due to sparse data, regression to the mean, data dredging and other instances of method failure.5 6
Ongoing news7–13 suggests it is time to debate incorporation of another potentially major source of uncertainty into literature assessments: investigator bias, the biasing of study results towards results expected or desired a priori by the investigators. Documented incidents8–12 sustain concerns that the pool of reported study results may be affected seriously by prejudices and vested interests of investigators and sponsors. Such incidents suggest that investigator bias can be larger than any other bias, and may often encourage adoption of dubious or even deadly treatments.
As with the more mundane bias sources, prevention is better than cure, especially because cure is often impossible. Ethical guidelines from prestigious scientific bodies can support investigators arguing for ethical conduct within their research group.14 Research audits, sometimes conducted by granting agencies and home institutions, may detect egregious forms of investigator bias, such as extensive data fabrication.12 13 Audits are often not done, however, and cannot detect more obscure forms of investigator bias, such as “guiding” studies towards desired results by careful selection of design or analysis features. For example, a desire for a null result could be served by using a noisy but non-differential exposure measure in place of a more accurate measure whose errors are differential, and by choosing exposure cutpoints that obscure the appearance of a monotone trend across categories. A desire for a positive result could be served by choosing exposure cutpoints that enhance the appearance of a monotone trend.
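As a purely illustrative sketch of the first of these tactics, the short simulation below shows how an exposure measure with heavy but non-differential noise, analysed in coarse categories, attenuates a genuinely monotone trend; every variable name and parameter value is hypothetical.

```python
# Purely illustrative simulation: a noisy but non-differential exposure measure,
# analysed in coarse categories, attenuates a real monotone trend.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

x = rng.uniform(0, 10, n)            # true continuous exposure
risk = 0.05 + 0.10 * (x / 10)        # risk rises monotonically with exposure
y = rng.binomial(1, risk)            # binary outcome

x_noisy = x + rng.normal(0, 8, n)    # heavily noisy, non-differential measure

def category_risks(exposure, cutpoints):
    """Mean outcome risk within each exposure category."""
    cats = np.digitize(exposure, cutpoints)
    return [round(float(y[cats == k].mean()), 3) for k in range(len(cutpoints) + 1)]

# Quartile categories of the accurate measure: the monotone trend is evident.
print(category_risks(x, np.quantile(x, [0.25, 0.50, 0.75])))

# The same categorisation applied to the noisy measure: the trend is substantially flattened.
print(category_risks(x_noisy, np.quantile(x_noisy, [0.25, 0.50, 0.75])))
```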
I will focus primarily on investigator bias associated with financial ties. Similar arguments could be made regarding bias associated solely with ideological commitments of investigators. Ascertaining such commitments, however, is far more intrusive and far more difficult to validate than ascertaining financial ties, and hence much less feasible. In my closing remarks I will return to this issue.
INTENT IS NOT THE KEY ISSUE
Investigator bias may be intentional or subconscious, may be of venal or noble origin, and may manifest in outright fraud or in subtle fallacies held true by the investigator. It may reflect no more than the influence of sincere wishes or beliefs of the investigator or their research community on the way findings are interpreted,15–17 or it may represent premeditated attempts to mislead.12 13 Regardless of the origin, disclosure is a source of data that can be used to study and deal with uncertainty about the presence and likely impact of bias associated with financial ties.
Note well that I say “associated with”. Such bias, if present, may represent biased selection by the sponsor of investigators already biased towards a sponsor’s position, rather than corruption of the investigator by sponsorship. The distinction between these origins may be of lesser importance than the existence and magnitude of the bias.
Dealing explicitly with the mere possibility of investigator bias (let alone quantifying suspicions) will raise heated objections and no doubt will require great delicacy. To repeat, inappropriate study guiding may be done without malicious forethought or financial interest: certain results may seem more credible and hence more desirable than others because they fit better with the prior beliefs of the investigators.17 Alas, those guiding beliefs may appear to be unfounded prejudice to groups with different prejudices (priors) about what is credible and how analysis should proceed.
Investigator bias does not however require strong prior beliefs. Some results may seem more desirable than others simply because they create a better-sounding story from the data, provide a greater feeling of novelty or worthiness of attention or better serve the interests of the investigators or sponsor. Regardless of intent, the outcome is the same: a depiction of evidence biased towards what the investigators believed a priori, or towards what they wish you to believe a posteriori, or both.
INVESTIGATOR BIAS AND STATISTICAL BIAS
The impact of investigator bias on results resembles the impact of informative prior distributions in Bayesian analysis, insofar as both pull results in a particular direction. The resemblance is especially strong when prior distributions are chosen to reflect actual prior beliefs (which should be given contextual rationales18) or to produce desired posterior distributions (which is legitimate only as part of an explicitly reverse-Bayes analysis18 19). In Bayesian analysis, however, the use of an explicit prior enables the reader to see the form and strength of the bias, and thus to judge whether they find the prior agreeable or at least plausible. Furthermore, priors may instead be chosen only to improve frequency or numerical properties of estimates (a shrinkage or ridging rationale) or to represent lack of information (an “objective-Bayes” rationale).
Regardless of whether the analysis is frequentist or Bayesian, incomplete documentation of ad hoc design and analysis decisions makes investigator bias especially difficult to detect and deal with. Furthermore, use of faulty heuristics can lead to biased reporting, even when the analyst is following accepted practices intended to thwart investigator bias.20 Examples include use of naive significance testing to decide what to control or present.
There may be no sharp boundary between faulty conventional practice and investigator bias. Indeed, many conventional practices can be viewed as the mechanical imposition of generic investigator biases. For example, conventional significance testing incorporates a bias favouring type II errors (false negatives) over type I errors (false positives), institutionalised in the ubiquitous fixed 0.05 α-level coupled with the usually overoptimistic 0.20 β-level (1−power) required in power calculations for grants. Ironically, the consequence of this testing paradigm is classical publication bias, which involves selective publication of “significant” results, leading to higher publication rates for false positives than for false negatives.
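To make this arithmetic concrete, the small calculation below uses the conventional α and power values together with a hypothetical mix of truly null and truly real effects; the mix of hypotheses is an assumption chosen only for illustration.

```python
# Illustrative arithmetic: with an assumed mix of null and real effects, the
# conventional 0.05 alpha and 0.20 beta, plus significance-driven publication,
# treat the two error types very differently.
n_null, n_real = 900, 100    # hypothetical numbers of truly null and truly real effects
alpha, power = 0.05, 0.80    # conventional design values (beta = 0.20)

false_pos = n_null * alpha           # significant, hence publishable: 45
true_pos = n_real * power            # significant, hence publishable: 80
false_neg = n_real * (1 - power)     # non-significant, hence likely unpublished: 20

print("expected false positives entering the literature:", false_pos)
print("expected false negatives likely to remain unpublished:", false_neg)
print("proportion of 'significant' findings that are false:",
      round(false_pos / (false_pos + true_pos), 2))
```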
Nonetheless, the bias of primary concern here arises from allowing the desired results to drive the design, analysis and presentation of the study—behaviour that can be viewed as scientific reasoning to a foregone conclusion. The direction of this bias will of course vary with the investigator; hence upward and downward investigator biases may operate in different studies within the same field.
HOW CAN AN EVALUATOR ACCOUNT FOR POSSIBLE INVESTIGATOR BIAS?
When biased reporting occurs in literature reviews, it may be discovered and corrected by going back to the original sources (which in extreme cases may reveal that the literature shows the opposite of what was claimed21). But most studies generate and analyse single data sets. In these cases one cannot detect unreported biasing judgements without having the data and the time to reanalyse them. It is then that disclosure can provide a clue to the direction of investigator bias, if the latter is present.
For many of us, discovering that authors have financial ties to parties vested in the results will influence not only our bet that investigator bias is present, but also our bet on its direction. In this sense, we would regard the financial disclosure as informative. Nonetheless, such disclosure also feeds into our own biases. A major question is then: how can we use such information transparently, if not fairly, in our own uncertainty assessment?
From a subjective-Bayesian perspective, a goal of an evaluation is to compose a well-informed opinion, bet or judgement in which all the components are combined in a logically consistent and probabilistically coherent manner.18 Initial judgements are represented by prior distributions. In this framework, one could construct a prior distribution for the impact of investigator bias. In a manner analogous to constructing a prior distribution for a selection-bias multiplier,3 4 this construction could use available data and plausible hypotheses about the direction of investigator bias.
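As one minimal sketch of such a construction (not a prescription), the Monte Carlo fragment below folds a hypothetical prior for investigator bias into a single study's reported log odds ratio, by analogy with bias analysis for a selection-bias multiplier; every distribution and numerical value is an assumption chosen for illustration.

```python
# Monte Carlo sketch: adjust one study's estimate using a hypothetical prior for
# investigator bias, by analogy with bias analysis for a selection-bias multiplier.
import numpy as np

rng = np.random.default_rng(1)
n_draws = 100_000

log_or_hat, se = np.log(1.8), 0.20   # reported log odds ratio and its standard error (invented)

# Prior for bias on the log-OR scale: a 50% chance of no bias, otherwise a
# half-normal bias pushing the estimate in the sponsor's preferred (positive)
# direction. The 50% probability and the 0.3 scale are purely illustrative.
bias_present = rng.random(n_draws) < 0.5
bias = np.where(bias_present, np.abs(rng.normal(0.0, 0.3, n_draws)), 0.0)

# Remove each sampled bias and propagate ordinary random error.
adjusted_log_or = rng.normal(log_or_hat - bias, se)

print("median bias-adjusted OR:", round(float(np.exp(np.median(adjusted_log_or))), 2))
print("95% uncertainty interval for the OR:",
      np.round(np.exp(np.quantile(adjusted_log_or, [0.025, 0.975])), 2))
```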
Priors about investigator bias will typically have a very asymmetric form: we expect investigator bias, if present, to fall in the direction favouring the investigators’ or their sponsor’s financial interest. It is here that we encounter an emotional minefield of indignation, driven by fears of prejudice in and abuse of such priors. Similar objections meet proposals to down-weight studies that produce results favourable to sponsors or other parties with financial ties to the investigators.
Yet the asymmetry reflects a harsh reality. In research funded or conducted by parties with a direct financial stake in the results, there may be an implicit threat of funding withdrawal (or worse) if results presented do not favour the sponsor’s interest or conform to their perceived needs for marketing and litigation. As an example from my own experience, Dow-Corning sponsored a series of rather crude insurance database studies of implanted medical devices and various outcomes,22–24 making clear that they did not want any examination of the key product under litigation, silicone breast implants.
This condition was not an issue for the first study, because cosmetic breast implants were not covered by insurance and too few reconstructive implants were present to provide a useful analysis. There were however sufficient numbers of reconstructive breast implants in the final study, so we included them in the analysis. Those breast implants exhibited positive associations with the very diseases that had been found associated with breast implants in previous studies.24 Upon receiving the manuscript, the sponsor reacted quite unlike before (its reactions had been almost nothing when breast implants were not examined): it immediately hired a physician to vociferously critique the study and dismiss the results, which helped provide a rationale for funding discontinuation. This occurred even though the third study (despite clear weaknesses) was better than the first two.
I feel fortunate that funding termination was all that happened to me. Sponsored investigators have been sued by their corporate sponsors in efforts to block publication of undesirable results.25 Investigators with no corporate ties have faced interference from corporations upset with their findings.26 27 Not all investigators choose to fight such pressure, which leaves them the choice of either not publishing unfavourable results or not producing such results in the first place. Some go so far as to guarantee their sponsor that the results of their proposed study will be helpful.10 Meanwhile, unfavourable results from research conducted within a company may languish as in-house memos that can be obtained only via litigation, and then only if the court orders their release.11 28
Such experiences would lead many (myself included) to assign a distribution for bias in sponsored published studies that is shifted toward the sponsor’s interest, when that interest is known and the results favour the sponsor. The consequence of using such a prior in a meta-analysis will be to shift results away from sponsor interests.
If however the results of sponsored studies are more accurate, use of a “sponsor-suspicion” prior would bias the meta-analysis away from the truth. The wisdom of such use can therefore be questioned. There is some consonance between such “sponsor-suspicion” priors and meta-analyses of the association of funding sources with study design and conclusions29–37 (interestingly, these associations are discussed more optimistically in those meta-analyses disclosing ties to the sponsors in question). By themselves, however, associations of sponsorship with study conclusions do not show which statistical results are the more accurate. Only features of the design, execution and analysis do that. While some of these features (especially design) have been found to be associated with sponsorship, those features can be examined directly in a meta-analysis, at least to the extent that they are accurately reported.
Compared with a “suspicion” prior, a less contentious starting point for evaluation of possible investigator bias is direct comparison of studies according to their financial ties (as in References 29–37) and other possibly relevant characteristics, such as past assertions or findings by the investigator. Such comparisons are no different from comparing studies according to characteristics that may be related to confounding, selection bias or measurement error. Again, the presence of important differences does not demonstrate that one group of studies is biased. It does however illustrate the existence of a source of study variation that should not be neglected in a full uncertainty assessment.
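One simple way to quantify such a difference is a fixed-effect meta-regression of study results on a disclosed-tie indicator, as in the sketch below, in which the six “studies” are invented purely for illustration; the squared coefficient on the indicator is also one candidate for the variance component mentioned in the next section.

```python
# Fixed-effect meta-regression of study log odds ratios on a disclosed-tie indicator.
# The six "studies" below are invented purely for illustration.
import numpy as np

log_or = np.array([0.10, 0.05, 0.15, 0.30, 0.45, 0.40])  # study log odds ratios
se     = np.array([0.20, 0.25, 0.30, 0.15, 0.20, 0.18])  # their standard errors
tie    = np.array([0.0,  0.0,  0.0,  1.0,  1.0,  1.0])   # 1 = disclosed sponsor tie

w = 1.0 / se**2                                   # inverse-variance weights
X = np.column_stack([np.ones_like(log_or), tie])  # intercept + tie indicator

XtWX = X.T @ (w[:, None] * X)                     # weighted normal equations
beta = np.linalg.solve(XtWX, X.T @ (w * log_or))
cov = np.linalg.inv(XtWX)                         # fixed-effect covariance of beta

print("ratio of odds ratios, tied vs untied studies:", round(float(np.exp(beta[1])), 2))
print("standard error of the log ratio:", round(float(np.sqrt(cov[1, 1])), 2))
```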
There may also be a role for statistical forensics in aiding evaluation of investigator bias. For example, suppose that after conducting a hundred studies a research organisation or institute has never reported a 0.05-level significant result unfavourable to its clients or sponsors. Such a history should raise suspicions of investigator bias: even if there were no real effects among anything the organisation studied, chance alone should have produced some significant unfavourable results. If unfavourable effects sometimes exist, they should have produced additional unfavourable results from neutrally designed studies. Thus the absence or extreme rarity of results unfavourable to sponsors could be considered evidence of investigator bias in studies from the organisation or institute. These arguments will be especially forceful when study results may influence subsequent funding or employment of the investigators.
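A back-of-the-envelope calculation illustrates the point, taking 2.5% as an assumed per-study chance (under the null, with a two-sided 0.05 test) of a significant result in the unfavourable direction:

```python
# Back-of-the-envelope forensic check: probability of a spotless record under the null.
n_studies = 100
p_unfavourable = 0.025   # assumed per-study chance of a significant result in the
                         # unfavourable direction when no effect exists (two-sided 0.05 test)

p_none = (1 - p_unfavourable) ** n_studies
print(f"P(no significant unfavourable results in {n_studies} null studies) = {p_none:.2f}")
# About 0.08 under the null alone; smaller still if some unfavourable effects are real.
```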
QUALITY OF DISCLOSURE INFORMATION
Thus far I have argued that financial disclosure is informative, making it a legitimate covariate for meta-analyses. Similarly, the squared difference between results from studies with and without ties to vested parties is a legitimate variance component for an uncertainty assessment. Once we accept the reality that disclosure data contain relevant statistical information, we can turn to the quality of the information ascertainable from available reports.
There are reports of investigators disclosing only a small fraction of their financial support from industry, even when required to give a full account by university and granting-agency rules.8 9 There is little reason to expect more thorough disclosure in journal articles. Full disclosure is often evaded via technicalities, such as payment of nondescript consulting fees that are not explicitly earmarked for the article and thus not deemed reportable, even when they may be for time spent on the database from which the article is generated. If reported at all, such funds may be described only by phrasing such as “author X has served as a consultant (or expert) for company Y”, with no indication that the payments for such services were made with the understanding that they would support time spent on the study being reported.
The frequency of under-reporting and deceptive reporting is far from known. Most incidents come to light only when investigation or litigation occurs, and those discoveries are not always reported in public venues. Other devices for undermining disclosure requirements include funds channelled through innocuously titled foundations or institutes set up specifically to conceal the ultimate funding source. Detecting such events and devices may demand investigative effort beyond the resources of almost all readers.
These events do not demonstrate that the resulting research is biased. They do however suggest that determination of financial ties from acknowledgements may miss a considerable proportion of important ties. In contrast, it is hard to imagine that many articles report funding from a financial stakeholder when in fact there was none, so that misreporting may be limited to under-reporting. Hence I would expect the net impact of under-reporting to be dilution of observed differences between articles reporting financial ties to stakeholders and those that do not.
Because this dilution is of unknown extent, its recognition should add more uncertainty to the final assessment. This uncertainty might be represented via a prior distribution on the unknown under-reporting rate, which would be analogous to a prior on the false-negative rate in bias analysis.1–4 Under-reporting may differ across journals depending on disclosure requirements; thus journal requirements would constitute a relevant covariate for the analysis.
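The sketch below gives a minimal version of such a correction, under the simplifying assumption that disclosures are never falsely claimed (so under-reporting can only dilute the observed tied-versus-untied difference); the prior and all numerical values are merely illustrative.

```python
# Monte Carlo sketch: correct an observed tied-vs-untied difference for
# under-reporting of ties, assuming disclosures are never falsely claimed
# (so under-reporting can only dilute the observed difference).
import numpy as np

rng = np.random.default_rng(2)
n_draws = 100_000

diff_obs = 0.25   # observed difference in mean log OR, disclosed-tie minus no-tie (invented)
q = 0.40          # observed proportion of studies disclosing a tie (invented)

# Prior on the under-reporting rate u = P(no disclosure | true tie): Beta(2, 6),
# centred near 0.25, truncated to draws consistent with the observed proportion q.
u = rng.beta(2.0, 6.0, n_draws)
p_true = q / (1 - u)               # implied true prevalence of ties
keep = p_true < 1.0
u, p_true = u[keep], p_true[keep]

# Dilution correction: diff_true = diff_obs * (1 + p*u / (1 - p)).
diff_true = diff_obs * (1 + p_true * u / (1 - p_true))

print("median corrected difference:", round(float(np.median(diff_true)), 3))
print("95% interval:", np.round(np.quantile(diff_true, [0.025, 0.975]), 3))
```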
FAIRNESS ISSUES
A major objection to use of disclosure information in the way I have described is that it can appear to convict the innocent along with the guilty. Thus extreme care must be taken to interpret funding source as merely suggestive of what investigator biases we would expect if such biases were operative. Of course, many will object to this conditional interpretation as well.
An analogous controversy in law enforcement concerns profiling, as when young Arab men are more likely than old Scottish ladies to be subjected to detailed security screening at an airport. Enhanced suspicions of certain demographic groups are based on past events and frequency of motives; selective airport screening based on these suspicions inconveniences but does not convict the innocent. Objections to profiling stem from values that elevate equal treatment before the law above certain safety concerns. Due to dramatic variations in values, there are sharp disagreements about what can be morally justified; hence no answer can please everyone.
In a similar fashion, bets about the frequency and direction of investigator bias in studies of a drug under litigation will be vastly different depending on whether the studies are funded by the manufacturer, or by the US Food and Drug Administration or by plaintiffs. My use of my own bets in an evaluation may provoke vehement condemnation from those who have vastly different bets or who morally object to profiling. Still, if I am trying to produce an honest assessment for myself, I will use my bets; those who hate my bets or methods will simply have to create their own bets and evaluation.
By refusing to profile studies based on characteristics such as financial ties or interests, one will produce an evaluation that assumes any investigator bias related to these characteristics has had no impact. If there is no empirical evidence or logical argument to warrant near-certainty that investigator bias is negligible, the resulting evaluation will be overconfident, insofar as it places unwarranted trust in the investigators. Policy based on the evaluation will thus be vulnerable to the blind-trust errors that have affected physicians who relied on fabricated trials12 or on distorted reporting from drug companies or academic publishers in their hire,27 38 patients who relied on those physicians, and investors who relied on under-regulated financial trusts. In light of these and other hard lessons about complete trust, excluding any information about the possibility of investigator bias seems irrational and dangerous, no matter how noble the reason.
PRACTICAL OBJECTIONS
Apart from ideological resistance, some objections to accounting for investigator bias resemble objections to more mundane sensitivity analysis and uncertainty assessment of ordinary biases.1–4 Typical objections are that there are too many unknowns, the problem is too complicated, the proposed solutions are too subjective and too subject to personal biases to be useful, and so on. These objections have yet to be accompanied by evidence that the current norms (ignoring the problem or making purely intuitive declarations about it) are superior for science or public health.
The objection of too many unknowns and too much complication is the usual response to difficult problems of any sort. It does remind us that any evaluation we come up with may easily be wrong no matter how much effort we expend, and so we must consider the potential benefit of our effort before embarking on the analysis. In doing so we should recognise that the best we can hope from such an analysis is a refinement of our own subjective judgement, not discovery of a scientific truth.
I find the objection of extreme subjectivity in the results of bias analysis to be no objection at all. Even though I regard the idea of singular truth as fundamental to science (just as it is to religion), I also think everything we claim to be knowledge is subjective and hence vulnerable to personal bias. This epistemic subjectivity is another harsh fact of life that the health sciences will have to accommodate as they mature.16 17 39 The process requires recognition that the illusion of objectivity is buttressed by rigid statistical conventions that prey upon and feed human cravings for certainty.40 It also requires recognition that these conventions have profound biases and value judgements (eg, favouring false negatives over false positives) built into their core. These biases and values are not shared by all stakeholders in methodological, subject matter or ethical debates.
CLOSING REMARKS
Investigator bias has the potential to overwhelm all other biases. It can become the dominant force in contexts (such as expert reports and testimony for litigation41) in which the restraints imposed by editors and peer review are absent. Thus, because of its importance, investigator bias should not be dismissed as unapproachable, any more than we should give up research on what seem to be hopelessly mysterious diseases.
Although I have focused on distortion arising from financial interests, I have no doubt that ideological commitment can be just as distortive. Indeed, dealing with ideological and other psychological sources of bias (such as commitment to earlier methods and conclusions) deserves to be on the research agenda for bias analysis. The story of environmental tobacco smoke may provide an excellent motivating example, one in which ideological bias rivals religious fervour.42 It also stands as a cautionary example in which knowledge of funding source was abused, in that it was used to attack studies directly rather than treated as merely a potential predictor of literature results.42 43 In approaching these problems scientifically, I suspect aid will be needed from those branches of psychology that deal with distorted thinking among normal individuals and groups.44 45
Openness of journals to unpopular research and strenuous debate remains the primary line of defence against information abuse and distortion. Such openness is not universal, and thus it seems likely that proposals to deal with investigator bias will raise a hornet’s nest of objections and questions. Perhaps the essential first step (if not major hurdle) is getting used to the idea that there may be good arguments for including the possibility of investigator bias in an uncertainty assessment. The demand for thorough financial disclosure may then seem no different from demanding hard (“objective”) details of design and analysis to aid assessments. Like other psychological constructs, ideological biases will be far more difficult to evaluate but may nonetheless have hard correlates that signal their presence if not magnitude, such as attempts to suppress or boycott opposing views.42 43
Dismissive or ideological objections to dealing with investigator bias will do no more good than they do for handling misclassification, selection bias or unmeasured confounders. Given disclosure data, one can do basic analyses to see whether reported financial ties are predictive of results or conclusions. Personal evaluations can explicate priors about investigator biases. Statistical forensics can be used as opportunities arise. And perhaps most importantly, critical comparison of study methods and data against study conclusions may alert one to grey-zone behaviours that would probably not be caught by audits (eg, guiding of studies towards results the investigators wish to find).
Acknowledgments
I wish to thank Jan Vandenbroucke, Alfredo Morabia, Jay Kaufman and Katherine Hoggatt for their comments on the first draft of this manuscript.
REFERENCES
Footnotes
Competing interests: None declared.
Disclosures: The author does consulting for both plaintiffs and defendants in litigation involving epidemiologic and statistical evidence, has done a number of studies that were motivated by such litigation, and has done a number of industry-sponsored studies.