Epidemiologists face a permanent challenge to improve the design, analysis, interpretation, and reporting of observational and evaluative studies.
Based on the analysis of a sample of articles reporting the results of observational epidemiological studies published in major journals, Pocock et al1 raised “serious concerns regarding inadequacies in the analysis and reporting of epidemiological publications”, thereby intensifying the outcry over the low quality of these papers and the need for guidelines regulating their publication.2 Moreover, the literature abounds with epidemiological studies that fail to identify important associations or that report associations later shown not to exist. These errors often have serious consequences, for example, when they involve the evaluation of interventions that cause adverse effects on human health. One recent case concerns the association between hormone replacement therapy (HRT) and cardiovascular disease (CVD), in which various observational studies systematically pointed in a single, erroneous direction: a protective role of HRT in the occurrence of CVD.3 Surprisingly, two recently published randomised controlled trials (RCTs) showed completely contrasting results: a harmful effect of HRT on the occurrence of CVD.4,5 There is no doubt that this “mistake” has had serious implications for women’s health, as for several years millions of women worldwide were prescribed HRT without doctors or patients being aware of the harm it could cause.
As epidemiological knowledge is built predominantly on results obtained from observational studies, the implications of wrong findings originating from these studies have proved troubling both for epidemiologists and for those who make use of epidemiological knowledge. Consequently, demands are quite rightly being made to reconsider the relation between the outcomes of experimental and observational epidemiological studies. Discussions have covered conceptual and methodological aspects and, more pragmatically, the implications of the two approaches for medicine and public health.
With respect to the evaluation of medical technologies (medicines, vaccines, etc), there is a general consensus that RCTs are the gold standard; however, this consensus is commonly extrapolated to the idea that, just like medical interventions, public health interventions not submitted to randomised trials are unworthy of consideration as such, and it is recommended: “to reject the scientific double standard of what constitutes acceptable evidence of efficacy for clinical versus public health interventions”.6
This question, despite all its importance for public health practice, has received sparse attention from epidemiologists and public health practitioners. While the modern practice of medicine is centred on interventions developed as a result of biomedical research, the same does not occur in public health. The central aim of any public health intervention must be to modify the health of populations, partly by reducing the harmful effects of morbid events but principally by reducing the rates of occurrence of these events, that is, their incidence. With respect to the goal of reducing incidence, biomedicine has so far made available to public health a set of interventions that, although relevant, are limited to only a few of the problems afflicting the health of human populations. Vaccines are perhaps the most important biomedical technology added to the public health arsenal in recent times; however, they are restricted to infectious diseases, and to just a few of them. The great majority of potential public health interventions, whether behavioural, environmental, or social, that could modify population health by reducing the incidence of specific or non-specific morbid events lie outside the sphere of biomedicine.
Another, no less important, aspect is that public health interventions, even if applicable to individuals, need to be applied across populations to be effective. To achieve their aims they have to be organised into programmes run within the framework of established health policies, and at the same time must possess several characteristics beyond efficacy. For instance, for a vaccine to be used in an immunisation programme, several related characteristics besides efficacy (costs, logistics, secondary effects, adverse effects, cross immunity, etc) need to be established. Taken together, these characteristics comprise what is referred to as effectiveness.7 Effectiveness is therefore a summary measure of a pool of elements (including the vaccine’s efficacy) and is best estimated in evaluative studies (possibly RCTs) in which the units of intervention and analysis are populations, not individuals.8 In summary, as far as public health is concerned, studies evaluating an immunisation programme’s effectiveness are just as important as those evaluating the efficacy of the vaccine used in that programme. As a consequence, an immunisation programme using a highly efficacious vaccine may fail to be recommended for a population because its effectiveness is only moderate or even low.
In situations in which interventions do not involve biomedical technologies, as is the case for most public health interventions, it is very difficult to evaluate their real efficacy. In many cases the intervention exists only as a programme or a policy; the only resource left is to estimate effectiveness, as there is no efficacy to be estimated. Although sometimes possible, randomised studies of the effectiveness of such interventions are operationally difficult to carry out, and in many situations the only remaining options are observational studies or non-randomised quasi-experiments.9 This perspective on evaluating public health interventions brings us closer to the traditions of evaluating social intervention programmes and policies in general, in which randomised designs are possible in principle, though not always feasible, as alternatives to non-randomised quasi-experiments or observational studies.10
The requirement to establish the efficacy of biomedical technologies by means of RCTs was a great achievement in terms of offering more effective and safer treatment options to the population. However, the automatic transfer of this principle to public health rests on the belief that public health is merely an extension of medicine and, consequently, that its interventions are biomedical interventions applied to populations. Sustaining this false assertion, and holding that the evaluative standard for public health interventions must always be the RCT, would make it unfeasible for public health to propose interventions in areas such as the environment, education, behaviour, and, principally, social interventions such as those addressing health inequalities.11
For epidemiologists and others involved in evaluating the impact of public health interventions, these misunderstandings and just criticisms must be interpreted as renewed opportunities to reaffirm the permanent challenge to improve the design, analysis, interpretation, and reporting of observational and quasi-experimental evaluative studies.