
Good intentions and received wisdom are not enough
SALLY MACINTYRE, MARK PETTICREW
MRC Social and Public Health Sciences Unit, University of Glasgow, 4 Lilybank Gardens, Glasgow G12 8RZ
Correspondence to: Professor Macintyre (sally@msoc.mrc.gla.ac.uk)


There is a common view among social and public health scientists that there is an evidence-based medicine (EBM) juggernaut: a powerful, naive, and overweening attempt to impose an inappropriately narrow and medical model of experimentation onto a complex social world. We have both frequently encountered hostility, among social scientists and among public health or health promotion practitioners and theorists, to attempts to apply EBM principles (for example, systematic reviews or experimental designs) in social or public health settings (for example, sex education in schools, health promotion campaigns, or community development1). We believe such hostility to be misplaced, and to be based on a number of misconceptions.

The first misconception is that systematic reviews and experimental designs have a wholly biomedical provenance. As Ann Oakley has pointed out, the use of experimental designs was well established in the United States by the 1930s, and from the early 1960s to the early 1980s there were many randomised experiments evaluating public policy interventions in the United States, these being considered the optimum design. Much of the early literature on experimental designs (including blinding) came from the social sciences, as a response to the perceived need to be able to make valid causal inferences.2

The second misconception is that the “real world” is too complex, messy, or culturally/historically specific for the appropriate application of EBM principles. Objections on the grounds that experimentation is often unethical or impractical in real life are common; however, experimental evaluation is more common in social settings than is often realised,3 4 and many apparent practical or ethical difficulties can be overcome. For example, a review by Berk, Boruch and colleagues describes randomised controlled trials (RCTs) of the effects of prison rehabilitation programmes, “welfare-to-work” type income supplements, electricity pricing as a means of managing demand, and of the educational effects of the children's programme “Sesame Street”.5

The third misconception is that social and public health interventions do not have the capacity to do harm, and that having good intentions is therefore a sufficient basis for policy making. There are enough examples of well meaning interventions with adverse effects to suggest that this is not the case. A weekly exercise programme among nursing staff, expected to have beneficial effects on fitness and musculoskeletal problems, had only one significant effect: it interfered with their ability to plan their work.6 A bicycle safety education programme (“Bike Ed”), designed to reduce cycle injuries in children, actually increased the overall risk of injury and significantly doubled the risk in boys.7 Another well intentioned intervention, the use of toughened pint glasses in bars as an injury prevention measure, might reasonably be expected to be effective. In fact, a recent RCT carried out in 57 bars in England and Wales found that the injury rate increased by about 60%, because the toughened glassware shattered more easily.8 There are many other examples of well meaning interventions whose harms outweigh their benefits.

The fourth misconception is that it is adequate to know that some intervention does good in general, and that it is not necessary to know how much good, at what cost, via what mechanisms, or for which subgroups of the population. However, the answers to these questions are particularly important for policy makers. Currently, the British government is committed both to improving population health and to reducing inequalities in health; an intervention that improves health on average may still widen inequalities if its benefits accrue mainly to already advantaged groups. One important benefit of well controlled studies is that they can identify unintended harms and benefits, and can analyse the differential impact of an intervention on different population groups. For example, the “Bike Ed” intervention was found to be particularly harmful in younger children, children from families with lower parental education levels, and children lacking other family members who bicycle.7 Although “Sesame Street” benefited all children, the gap between fast and slow learners actually increased.5

The fifth misconception is that plausibility is a sufficient basis for policy making. It may seem obvious that laying infants down to sleep in the prone position is a good idea because this position mimics the recovery position and should reduce the likelihood of choking on or inhaling vomit; but studies from a number of countries now suggest that advice to do this in fact placed babies at greater risk of sudden infant death syndrome (SIDS).9 Even when a range of laboratory, epidemiological, and other data point to the likely benefits of an intervention, for example dietary supplementation with vitamin A, controlled intervention studies can demonstrate unanticipated adverse results.10

The sixth misconception is that experimental methods may underestimate the benefits of interventions because they define their outcomes too narrowly or take too short-term a time frame. This may relate to Oakley's observation that one reason for the decline in the use of randomised experiments in the United States might have been that they tended to show the interventions to be relatively ineffective (or to do harm).2 Certainly, experimental and observational estimates of the effects of social interventions can differ markedly (for example, observational studies of adolescent pregnancy prevention interventions give more optimistic estimates of effectiveness than RCTs11). However, one important reason for systematically evaluating interventions is that the wider beneficial effects of some interventions are not always obvious: they may remain unmeasured, be overlooked, or only become apparent once enough rigorous evaluations have accumulated to demonstrate them. In the United States in the 1980s there was uncertainty among politicians about the effectiveness of a supplemental food programme for women and children. However, a synthesis of good quality evaluations showed that it had modest positive effects on birth weights.12 Other, and sometimes unintended, positive effects may only be convincingly demonstrated in large, prospective, well controlled intervention studies.

We suggest that the antipathy towards evidence-based principles in social science and public health is often based on misunderstandings about the principles of evidence-based policy; reluctance to accept that well intentioned interventions may do more harm than good, or be ineffective and thereby a waste of public money and time; and unjustified defeatism in the face of apparent operational or ethical problems. Rather than thinking of EBM as a biomedical orthodoxy whose applications to social policy, education, the criminal justice system, etc, should be resisted, we believe that the thoughtful extension of evidence-based principles to all these realms of public policy is important for all those who wish to improve human well being.

References