Background Test use has increased over recent decades despite disappointing results from test accuracy evaluations. Difficulties in understanding and applying test accuracy information are purported to be important contributors to this observed evidence ‘gap’. Empirical research to date is based on the premise that formal probability revision is a necessary prerequisite for informed diagnostic decision making, and is characterised by self-selected samples with recent experience or expertise in test evaluation. The aim of this survey was to describe how clinicians apply existing test accuracy metrics in diagnostic decision making.
Methods An incentivised, electronic survey was used. Informed application of test accuracy information was evaluated by asking respondents to indicate their management decision after each of nine different representations of the same test accuracy information was presented alongside a common hypothetical scenario. Quantitative and qualitative syntheses were based on the closed and open responses accompanying these management decisions.
Results A total of 204 General Practitioners (response rate 95%) did not appear to be self-selected on the basis of academic position, involvement in policy or experience in test evaluation. Sensitivity and specificity, the annotated 2×2 diagnostic table and predictive values were reported as familiar metrics by most respondents. Likelihood ratios, the diagnostic odds ratio (DOR) and the area under the ROC curve (AUC) were familiar to fewer than one-third of respondents. Application of test accuracy metrics resulted in marked variation in responses to both positive and negative test results, although greater inconsistency and management uncertainty was observed following presentation of a negative test result. Formal probability revision was not a feature of the diagnostic decision making process. Test errors (false negatives and false positives) were prominent in the translational pathway from quantitative summary estimates of test accuracy to management decisions. Summary measures that separate the two dimensions of test accuracy in the absence of prevalence information (for example, sensitivity and specificity) appeared to result in a misplaced emphasis on either false positive or false negative test errors. Presenting test accuracy data using the 2×2 diagnostic table or a pictograph attenuated this effect.
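The formal probability revision referred to above is the standard application of Bayes’ theorem to a test result: the same sensitivity and specificity imply very different post-test probabilities at different prevalences. A minimal sketch of that calculation, using illustrative numbers that are not drawn from the survey:

```python
# Hypothetical illustration (numbers are not from the survey): how a single
# pair of sensitivity/specificity values yields different predictive values
# at different prevalences, via Bayes' theorem.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied at a given disease prevalence."""
    # P(disease | positive test)
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    # P(no disease | negative test)
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# The same test (90% sensitive, 90% specific) at two prevalences:
for prev in (0.01, 0.30):
    ppv, npv = predictive_values(0.90, 0.90, prev)
    print(f"prevalence={prev:.0%}  PPV={ppv:.1%}  NPV={npv:.1%}")
```

At 1% prevalence the positive predictive value of this hypothetical test is only about 8%, while at 30% prevalence it is about 79%, which is why sensitivity and specificity presented without prevalence can misdirect the weighting of false positive and false negative errors.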
Conclusion The choice of test accuracy metric appears to have a profound effect on diagnostic decision making. Understanding, contextual factors and motivational biases are likely contributors to the observed variability. It is unclear to what extent the advantage of any test accuracy metric for informed decision making rests on familiarity as opposed to intuitiveness. Simultaneous illustration of both dimensions of test accuracy, in order to facilitate informed diagnostic decision making, requires further exploration.