Commentary: What price perfection? Calibration and discrimination of clinical prediction models
Copyright © 1992 Published by Elsevier Inc.