League tables and acute myocardial infarction
Introduction
The introduction of measures of performance in the public services of some countries1, 2, 3 has been aimed at enhancing their accountability.4 The argument has been that such measures will identify the extremes of performance,5 which will permit a more detailed investigation of providers who seem to perform badly. Reference to average performance is an important feature of these measures. The raising of performance of hospitals that compare badly, or the identification of those aspects of practice with better than average outcomes, should lead to overall improvements. If comparisons are to be made, however, like must be compared with like and criticism should be of outcomes that can be influenced by those responsible for them. There are several uncertainties about whether these conditions can yet be achieved for the measures of performance and outcome that have so far been promoted.
We explored these uncertainties for deaths after admission to hospital in Scotland, UK, for acute myocardial infarction to find the pitfalls of simple comparisons. In 1994 and 1995, the Scottish Office Clinical Resource and Audit Group (CRAG) published reports6, 7 intended to improve standards of care by focusing attention on disparities in outcome measures. Despite the assurance that “it would be wrong to conclude… that one hospital provides better treatment than another”, part of their perceived use was that “… Health Boards and fund-holding GPs [general practitioners] purchasing treatment from that Trust (which appeared to fare badly) will have questions to ask”.6 Although not presented as league tables, the reports' findings enabled comparisons8 and were represented as such in the general press,9 with lists of the “worst five” trusts, which raised other questions about the ethical basis of their publications.10
There are several reasons for criticising these findings and for proposing more reliable comparisons that have implications beyond this example.11 In the 1995 CRAG example, the data related to patients discharged in 1991–94 and, therefore, could not take account of changes in the care of patients within this period.12 Differences between hospitals in the characteristics of the patients they admit, and the risks of mortality that variation in case mix may represent, need to be taken into account.13, 14 Case-mix variations may reflect differences in the aggregate of the patients who were treated, as well as access to hospital care and other features of the communities in which patients live. Morrison and colleagues15 have shown that, in north Glasgow, Scotland, people who live in an area of low socioeconomic status are at higher risk of myocardial infarction, have less chance of reaching hospital alive, and are more likely to die soon after infarction. There is no basis for the a priori assumption that outcome measures can be devised along a single dimension. Measures are likely to differ for subgroups of patients within a single broad category: the number of secondary diagnoses, and patients' ages, sexes, and previous medical histories are all likely to influence outcomes and should therefore be taken into account when making allowances for case mix.
We analysed these differences to develop more rigorous measures of hospital performance. We took into account differences in hospital case mix and the characteristics of the populations the hospitals serve. We included the influence of deaths out of hospital on deaths in hospital, and made allowances for the ways in which outcomes may vary for different subgroups.
Methods
We took data from the Scottish system of hospital discharge records. The Information and Statistics Division of the National Health Service in Scotland linked these records for 1981–95 and added information about all Scottish deaths provided by the General Register Office for Scotland.16 These data provide information about the hospital care of individuals within geographically defined populations. We used data for 1993, when 22 905 individuals were admitted to hospital or died with a principal…
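The record-linkage step described above can be sketched in miniature. Everything here is an illustrative assumption: the field names, identifiers, and dates are invented, not the actual ISD Scotland or General Register Office record layouts, which this excerpt does not describe.

```python
# Hypothetical sketch of linking hospital discharge records to death
# registrations by a shared patient identifier. Field names and data
# are invented for illustration; the real linked dataset's schema is
# not shown in the source.

discharges = [
    {"patient_id": 1, "admitted": "1993-03-01", "diagnosis": "AMI"},
    {"patient_id": 2, "admitted": "1993-05-12", "diagnosis": "AMI"},
]
deaths = {1: "1993-03-20"}  # patient_id -> date of death, if registered

def link_records(discharges, deaths):
    """Attach the date of death (or None) to a copy of each discharge record."""
    linked = []
    for rec in discharges:
        rec = dict(rec)  # copy, so the input records are left untouched
        rec["died"] = deaths.get(rec["patient_id"])
        linked.append(rec)
    return linked

linked = link_records(discharges, deaths)
```

Linking deaths to discharges in this way is what lets an analysis count deaths out of hospital as well as deaths in hospital for a geographically defined population.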
Results
The first part of the hospitals' study adopted an orthodox analytical approach (logistic regression) to standardise case mix for age, sex, and a history of previous hospital admission (figure 1). Standardisation for previous admission was done separately for each of: acute myocardial infarction, other ischaemic heart disease, diabetes, other disorders of the circulatory system, respiratory disorders, cancer, and other causes of admission. The results are presented as the apparent effect of…
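A minimal sketch of this kind of case-mix standardisation follows. The simulated cohort, coefficient values, and variable names are assumptions for illustration only, not the paper's data or fitted model; the point is only the mechanics of fitting a logistic regression and comparing observed with case-mix-expected deaths.

```python
import math
import random

# Hypothetical illustration of logistic-regression case-mix
# standardisation for age, sex, and previous admission. All data
# are simulated; none of this reproduces the paper's analysis.

random.seed(1)

def simulate_patients(n=800):
    """Return (features, died) pairs; features = [1, age_c, sex, prior]."""
    rows = []
    for _ in range(n):
        age_c = random.gauss(0.0, 1.0)   # age, centred and scaled
        sex = random.randint(0, 1)
        prior = random.randint(0, 1)     # any previous hospital admission
        logit = -1.5 + 1.0 * age_c + 0.3 * sex + 0.8 * prior
        p = 1.0 / (1.0 + math.exp(-logit))
        rows.append(([1.0, age_c, sex, prior], 1 if random.random() < p else 0))
    return rows

def fit_logistic(rows, lr=0.5, iters=1500):
    """Fit coefficients by batch gradient ascent on the log-likelihood."""
    k = len(rows[0][0])
    n = len(rows)
    beta = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for x, y in rows:
            p = 1.0 / (1.0 + math.exp(-sum(b * v for b, v in zip(beta, x))))
            for j in range(k):
                grad[j] += (y - p) * x[j]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

rows = simulate_patients()
beta = fit_logistic(rows)

# Expected deaths under the fitted case-mix model; comparing a
# hospital's observed deaths with this expectation gives an
# observed/expected ratio standardised for case mix.
expected = sum(
    1.0 / (1.0 + math.exp(-sum(b * v for b, v in zip(beta, x))))
    for x, _ in rows
)
observed = sum(y for _, y in rows)
```

In a real comparison the model would be fitted to the pooled national data and applied to each hospital's patients, so that hospitals admitting older or sicker patients are not penalised for their case mix.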
Discussion
Many of our findings will confirm clinicians' appreciation that the content of the care provided by hospitals varies from one area to another, that hospitals differ in ways other than care offered, and that comparisons between hospitals should reflect these differences. At the same time, to suggest that the outcome of substantial public investment should be monitored and assessed is not unreasonable. An added argument is that, if clinical guidelines and the audit of their application are to…
References (30)
Clinical indicators for hospitals announced. BMJ 1997.
et al. Report cards on cardiac surgeons: assessing New York State's approach. N Engl J Med 1995.
Performance reports on quality—prototypes, problems, and prospects. N Engl J Med 1995.
The use of performance indicators in the public sector. J R Stat Soc A 1990.
Focus on performance indicators. BMJ 1988.
Clinical outcome indicators, 1993. 1994.
Clinical outcome indicators, 1994. 1995.
Scottish death rates published with health warning. BMJ 1994.
Report details patient survival rates. The Scotsman 1996 Feb 17.
et al. Freedom of information: towards a code of ethics for performance indicators. Res Intelligence 1996 Jul.
On the unintended consequences of publishing performance data in the public sector. Int J Public Admin.
Can you measure performance? BMJ.
Mortality league tables: do they inform or mislead? Qual Health Care.
Risk adjusting health care outcomes: a methodological review. Med Care.
Effect of socioeconomic group on incidence of, management of, and survival after myocardial infarction and coronary death: analysis of community coronary event register. BMJ.