Abstracts

Plenary session I

01 ASPIRIN AND FOLATE FOR THE PREVENTION OF RECURRENT COLORECTAL ADENOMAS—RESULTS FROM A MULTICENTRE FACTORIAL TRIAL

M. J. Grainge1, K. R. Muir1, V. C. Shepherd1, N. C. Armitage2, R. F. A. Logan on behalf of the UKCAP Trial Group1. 1Division of Epidemiology and Public Health; 2Division of Surgery, University of Nottingham, University Hospital, Nottingham, UK

Background: Many observational studies have found regular aspirin use to be associated with a reduced risk of colorectal (CR) cancer, and two randomised trials have shown that aspirin may reduce the risk of recurrent CR adenomas, although results from these trials were not wholly consistent. Dietary folate intake has also been found to be associated with a reduced risk of CR neoplasms in prospective and case–control studies.

Objective: To test whether aspirin use with or without folate supplementation can prevent the recurrence of CR adenomas by undertaking a large factorial trial.

Design: Double blind, placebo controlled randomised trial, using a 2×2 factorial design.

Setting: 10 centres, nine in the UK and one in Denmark.

Participants: Patients who had one or more adenomas (⩾0.5 cm) removed in the 6 months before enrolment. All participants were followed up at intervals of 4 months to assess compliance, with a second colonoscopy arranged for 3 years after trial entry.

Interventions: Aspirin (300 mg/day enteric coated) v placebo and folate supplements (500 μg/day) v placebo.

Primary outcome measure: A histologically confirmed CR adenoma or cancer either at the end examination or during the course of the trial.

Results: 945 patients were recruited to the study, of whom 853 underwent a second colonoscopy and were included in an intention to treat analysis. Full compliance with trial medication was reported by 700 patients. In all, 99 (22.8%) of 434 patients receiving aspirin had a recurrent adenoma, compared with 121 (28.9%) of 419 receiving placebo (relative risk (RR) = 0.79 (95% confidence interval, 0.63 to 0.99)). Folate supplementation had no effect on adenoma recurrence (RR = 1.07 (0.85 to 1.34)). Altogether, 104 advanced CR adenomas were observed (on the basis of villous/tubulovillous features, size ⩾1 cm, or evidence of severe dysplasia or cancer); these occurred in 41 (9.4%) of the patients receiving aspirin and 63 (15.0%) of those receiving placebo (RR = 0.63 (0.43 to 0.91)). Folate use was not associated with the development of advanced lesions, nor was there any evidence of an interaction between aspirin and folate use for either outcome measure.
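The headline relative risk can be checked directly from the counts reported above. The snippet below is an illustrative recalculation using the standard log relative risk method, not the trial's own analysis code:

```python
from math import exp, log, sqrt

# Counts reported above: recurrent adenomas in 99/434 aspirin v 121/419 placebo patients
a, n1 = 99, 434    # events and total, aspirin group
b, n2 = 121, 419   # events and total, placebo group

rr = (a / n1) / (b / n2)
se = sqrt(1/a - 1/n1 + 1/b - 1/n2)          # SE of log(RR), standard large-sample formula
lo, hi = (exp(log(rr) + z * se) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")   # RR = 0.79 (95% CI 0.63 to 0.99)
```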

Conclusions: Aspirin (300 mg/day) but not folate (500 μg/day) use was found to reduce the risk of CR adenoma recurrence, with a suggestion that aspirin could have an important role in preventing the recurrence of advanced lesions.

02 THE EFFECT OF SOCIOECONOMIC DEPRIVATION AND OTHER PROGNOSTIC FACTORS ON THE TREATMENT OF AND SURVIVAL FROM BREAST CANCER

A. Downing1,2, M. S. Gilthorpe1, J. Stefoski Mikeljevic2, K. Prakash1, D. Forman1,3. 1Centre for Epidemiology & Biostatistics, University of Leeds; 2CRUK Clinical Centre, St James’ Hospital, Leeds; 3Northern and Yorkshire Cancer Registry and Information Service, Arthington House, Cookridge Hospital, Leeds

Objective: To investigate the association between socioeconomic status (SES) and (a) survival and (b) treatment pattern in a large sample of female breast cancer patients diagnosed between 1998 and 2000.

Data and methods: Women diagnosed with breast cancer between 1 January 1998 and 31 December 2000 were identified from the Northern and Yorkshire Cancer Registry database (n = 12 808). SES was defined using the Townsend deprivation index, derived from patients’ postcodes. Factors such as age at diagnosis, tumour stage, type of treatment, and the time taken to receive a first hospital appointment or start treatment were examined in relation to SES quartiles. Cox proportional hazards analysis was used to estimate the association between SES and five year survival, and logistic regression to estimate the association between SES and type of treatment, both accounting for confounding factors.
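As a concrete illustration of this modelling strategy (not the authors' actual code; the data file and column names below are hypothetical), the two regressions might be set up as follows using the lifelines and statsmodels packages:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
import statsmodels.formula.api as smf

# Hypothetical registry extract, one row per patient; column names are assumptions:
# time (months), died (0/1), townsend_quartile (1-4), age (years), stage (1-4),
# bcs (0/1, breast conserving surgery)
df = pd.read_csv("breast_cancer_registry.csv")

# Cox proportional hazards: HR per quartile change in deprivation, adjusted
cph = CoxPHFitter()
cph.fit(df[["time", "died", "townsend_quartile", "age", "stage"]],
        duration_col="time", event_col="died")
cph.print_summary()   # exp(coef) for townsend_quartile = HR per quartile change

# Logistic regression: odds of receiving BCS per quartile change, adjusted
logit = smf.logit("bcs ~ townsend_quartile + age + stage", data=df).fit()
print(np.exp(logit.params))   # odds ratios
```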

Results: Women living in the most deprived quartile were diagnosed at an older age than those living in the most affluent quartile (45% v 36%) and were less often diagnosed with early stage (I or II) disease (75% v 82%). Increasing deprivation was significantly associated with poorer survival (HR 1.11; 95% CI 1.08 to 1.13 per quartile change in Townsend score), which compounds to a hazard ratio of about 1.11³ ≈ 1.37 across the three quartile steps separating the most deprived from the most affluent quartile. Separate analyses revealed that this increased risk was mainly apparent in patients undergoing breast conserving surgery (BCS) (HR 1.14; 95% CI 1.08 to 1.20 per quartile change in SES) compared with those having a mastectomy (HR 1.04; 95% CI 1.00 to 1.08 per quartile change) or receiving no surgery (HR 1.04; 95% CI 1.00 to 1.08 per quartile change). Increasing deprivation was also associated with significantly decreased odds of undergoing BCS (OR 0.86; 95% CI 0.84 to 0.89 per quartile change in SES) and of receiving radiotherapy (RT) (OR 0.90; 95% CI 0.88 to 0.93 per quartile change), although there was no significant difference in the odds of having a mastectomy in relation to SES (OR 1.02; 95% CI 0.99 to 1.05).

Conclusions: Women living in the most deprived areas were more likely to be diagnosed with breast cancer at an older age and with more advanced disease, and were less likely to be treated by BCS or to receive RT. Increasing deprivation was significantly associated with a worse prognosis, although this was seen mainly in patients undergoing BCS; this result requires further investigation.

03 BREAST CANCER IN THE FAMILY: CHILDREN’S PERCEPTIONS OF THEIR MOTHER’S CANCER AND ITS INITIAL TREATMENT—A QUALITATIVE STUDY

S. Ziebland1, G. Forrest2, C. Plumb2, A. Stein2. 1Department of Primary Health Care, University of Oxford, Old Road Campus, Headington, Oxford; 2Section of Child and Adolescent Psychiatry, University of Oxford, Park Hospital for Children, Old Road, Headington, Oxford, UK

Objective: To explore how children of mothers newly diagnosed with breast cancer perceive their mother’s illness and its initial treatment, and to contrast their accounts with their mothers’ perceptions of their knowledge.

Design: Qualitative interview study with thematic analysis.

Setting: Home based, audio tape recorded interviews with mothers, fathers, and children in Oxfordshire, UK.

Participants: 37 mothers with early breast cancer and 31 of their children aged between 6 and 18 years. The children were interviewed by a child psychiatrist, with consent from the parents and assent from the children.

Results: Awareness of cancer as a life threatening illness was evident even amongst most of the youngest children interviewed. Children from the age of 7 years were much more aware of cancer as a life threatening illness than their parents realised. The main sources of this information were TV soap operas, health campaigns, and celebrities, as well as direct experience of relatives or friends’ parents with cancer. Children described specific aspects of their mother’s treatment as especially stressful (seeing her immediately postoperatively, chemotherapy, and hair loss). Children often suspected something was wrong even before they were told the diagnosis. Parents relied on their children asking questions rather than offering information themselves. Parents also sometimes misunderstood their children’s reactions, underestimated the emotional impact, or did not recognise the children’s need for more preparation and age appropriate information about the illness and treatment.

Conclusions: As part of their care, parents newly diagnosed with a life threatening illness need to be supported to think about how they will talk to their children. GPs and hospital specialists, as well as nurses, are well placed to be able to help with these issues, and if necessary to be involved in discussions with the children. The provision of appropriate information, including recommended websites, should be part of this care. More information specifically designed for young children is needed.

Parallel session A

Lifecourse I

04 BIRTH WEIGHT AND ADULT LIVER DAMAGE IN BRITISH WOMEN’S HEART AND HEALTH STUDY

A. Fraser1, S. Ebrahim2, G. Davey Smith1, D. A. Lawlor1. 1Department of Social Medicine, University of Bristol, Bristol; 2Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK

Background: According to the fetal programming hypothesis, fetal undernutrition in mid and late gestation spares the brain at the expense of growth of the trunk, including the liver. This may result in permanent alterations to liver function. Support for this hypothesis comes from both animal and human studies.

Objectives: To examine whether there are associations between birth weight, a proxy indicator for intrauterine nutrition, and liver damage in adulthood.

Design: Cross sectional study.

Participants: 2122 British women aged 60 to 79 with self reported birth weight, randomly sampled from 23 British towns.

Main outcome measures: Serum levels of aminotransferases (ALT and AST), γ-glutamyltransferase (GGT), alkaline phosphatase (ALP), albumin, and total bilirubin.

Results: Age adjusted levels of both ALT and GGT decreased linearly across increasing thirds of the birth weight distribution. ALP levels were higher in women in the lowest third than in other women. No evidence was found for associations of birth weight with AST, total bilirubin, or albumin. After adjustment for age, childhood and adulthood social class, physical activity, smoking, alcohol consumption, and adult body mass index, an increase of 1 SD of birth weight (691 g) was associated (separately) with a 2% decrease (95% confidence interval, 0% to 4%, p = 0.017) in the geometric mean of ALT, a 4% decrease in GGT (1% to 6%, p = 0.006), and a 2% decrease in ALP (0% to 3%, p = 0.001). When the analyses were repeated excluding women with diabetes, cardiovascular disease, and liver cancer, the results were not substantially altered.

Conclusions: These results support the hypothesis that intrauterine nutritional status and development are aetiological factors in adult liver damage, and that these associations are not explained by confounding from social class and lifestyle, as adjusting for a range of potential confounding factors did not materially alter the effect estimates. Given the robust evidence for inverse associations of birth size with cardiovascular disease and diabetes, and the evidence associating markers of liver damage and function with cardiovascular disease and diabetes, it is possible that liver function mediates some of the associations between intrauterine factors and cardiovascular disease and diabetes.

05 THE ASSOCIATION OF BODY MASS INDEX MEASURED IN CHILDHOOD, ADOLESCENCE, AND YOUNG ADULTHOOD WITH CORONARY HEART DISEASE AND STROKE RISK: FINDINGS FROM FOUR HISTORICAL COHORT STUDIES

D. A. Lawlor1, R. M. Martin1, D. Gunnell1, D. A. Leon2, B. Galobardes1, S. Ebrahim2, J. Sandhu1, Y. Ben-Shlomo1, P. McCarron3, G. Davey Smith1. 1University of Bristol; 2London School of Hygiene and Tropical Medicine; 3Queen’s University Belfast, UK

Background: There is concern that the childhood epidemic of obesity will reverse the downward population trends in cardiovascular disease seen in most developed countries for over three decades. There is, however, little epidemiological evidence that being overweight or obese in childhood is associated with future cardiovascular disease risk.

Objective: To assess the association of early life body mass index (BMI) with coronary heart disease (CHD) and stroke risk.

Design: Associations were assessed in four historical cohort studies in which height and weight had been determined using standard procedures, and in which participants have been traced and linked to national mortality data (proportion of traced participants between 83% and 97% for each cohort). Participants in the four cohorts were born between 1922 and 1937 (Boyd Orr cohort), 1927 and 1956 (Christ’s Hospital School cohort), 1928 and 1950 (Glasgow Alumni cohort), and 1950 and 1956 (Aberdeen Children of the 1950s cohort). They were aged 2–15, 9–19, 16–22, and 4–6 years, respectively, at the time of assessment of their height and weight. The prevalence of children in manual social classes (based on fathers’ occupation) in each cohort was 52%, 5%, 6%, and 36%, respectively.

Results: Participants in all four cohorts had similar mean BMI to those reported for contemporary children and young adults, but fewer were overweight or obese. BMI was not associated with future CHD or stroke risk in any individual cohort. The pooled (all four cohorts) adjusted hazard ratio (HR) per standard deviation of BMI measured in early life was 1.05 (95% confidence interval, 0.95 to 1.16) for CHD and 1.01 (0.91 to 1.13) for stroke. The pooled HR for CHD comparing those who were overweight or obese for their age with all other participants was 1.08 (0.85 to 1.38), with no strong evidence of an association between overweight/obesity and stroke risk (HR = 1.11 (0.75 to 1.64)). The effects of BMI and of overweight/obesity did not vary by individual cohort or by age at which weight and height were assessed.
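A pooled hazard ratio of this kind can be produced by fixed effect, inverse variance weighting of the cohort-specific log hazard ratios. The sketch below illustrates the mechanics with made-up cohort estimates, not the study's actual data:

```python
import numpy as np

# Hypothetical cohort-specific hazard ratios (per SD of BMI) with 95% CIs;
# illustrative numbers only, not the four cohorts' actual results
hr    = np.array([1.10, 0.98, 1.04, 1.08])
ci_lo = np.array([0.90, 0.80, 0.88, 0.92])
ci_hi = np.array([1.34, 1.20, 1.23, 1.27])

log_hr = np.log(hr)
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)   # recover SE from the CI width
w  = 1 / se**2                                      # inverse-variance weights

pooled_log = np.sum(w * log_hr) / np.sum(w)
pooled_se  = np.sqrt(1 / np.sum(w))
lo, hi = np.exp(pooled_log + np.array([-1.96, 1.96]) * pooled_se)
print(f"Pooled HR = {np.exp(pooled_log):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```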

Conclusions: These results do not provide strong evidence that being overweight/obese in childhood is associated with future cardiovascular disease risk. The lack of heterogeneity in effect between the cohorts—despite differences in birth years, age at weight and height assessment, and socioeconomic background—suggests that these results may be widely generalisable. With respect to public health interventions to reduce levels of cardiovascular disease, a focus on adult, rather than childhood, overweight and obesity may be most important.

06 THE IMPLICATIONS FOR LIFE COURSE RESEARCH OF ATTRITION AND BIAS: EVIDENCE FROM THREE COHORTS IN THE WEST OF SCOTLAND TWENTY-07 STUDY

H. Tunstall, M. Benzeval. MRC Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK

Background: Lifecourse epidemiology plays a significant role in unravelling the social and biological processes that operate over time to influence health. Such research requires longitudinal studies, which inevitably suffer from attrition and possibly selection bias.

Objective: To examine patterns of attrition and bias at different stages of the life course, across different modes of interview and types of non-response, by a range of socioeconomic, demographic, and health factors and to consider the implications of the findings for life course epidemiology.

Design: The Twenty-07 study has followed three cohorts of people at different life stages (originally aged 15, 35, and 55 years) for nearly 20 years by both face-to-face interviews and postal surveys. Participants and non-participants in each of the four waves of data collection were compared for sex, social class, housing tenure, car ownership, long term limiting illness, and self assessed health by age cohort, cause of non-response, and survey type.

Setting: Clydeside County Conurbation (CCC), Scotland, UK.

Participants: 4510 residents of the CCC area.

Main outcome measures: Sociodemographic and survey based factors associated with survey participation and non-participation.

Results: Relative to baseline, the proportions of survey participants retained at wave 4 in the 1970s, 1950s, and 1930s birth cohorts were 56%, 68%, and 54%, respectively. At wave 4, “moving out of the area” was a significant cause of non-response among the youngest cohort, accounting for 14% of the sample, while death accounted for 23% of the oldest sample. The use of a postal questionnaire for part of the sample in wave 3 of the study was associated with significant falls in participation among the youngest and middle cohorts. Non-response as a result of death was greater among males than females at all ages, but other reasons for non-response were more common among males in the youngest cohort and among females in the oldest cohort. The proportion of responders who were home owners, car owners, and from non-manual social class households at wave 1 increased across most waves in each age cohort, so that differences between survey participants and non-participants in these variables were statistically significant at wave 4. Differences in limiting longstanding illness and self rated health between participants and non-participants were most marked in the oldest cohort.

Conclusions: The characteristics of longitudinal attrition vary between age groups, with important implications for analysis of lifecourse epidemiology.

07 DOES AGE AT EXPOSURE INFLUENCE THE DEVELOPMENT OF CARDIOVASCULAR DISEASE FOLLOWING THE 1944–1945 CHANNEL ISLANDS’ SIEGE?

R. F. Head1, M. S. Gilthorpe2, A. Huntington3, G. T. H. Ellison1. 1St George’s University of London, London, UK; 2Biostatistics Unit, University of Leeds, Leeds; 3States of Guernsey Board of Health, Guernsey, UK

Background: The 1940–1945 German occupation of the Channel Islands culminated in a 12-month siege, during which there were severe shortages of food, fuel, and other basic necessities. Like the Leningrad siege and the “Dutch hunger winter”, the Channel Islands’ siege offers a natural experiment for examining the impact of undernutrition on subsequent health, by comparing Islanders resident during the occupation with those who left the island before the occupation began. The analyses focus on a comparison of Channel Islanders who were exposed or not exposed to the siege during childhood and adolescence. The aim was to establish whether exposure to the siege affected the risk of cardiovascular disease (CVD) in later life, and whether this varied by birth weight or age at exposure.

Methods: The cohort was based on 1673 live births delivered by a Guernsey midwife between 1923 and 1937. Birth records were matched with wartime registers to identify cohort members who remained on Guernsey during the war, and those who were evacuated to the mainland or had already moved elsewhere. The hospital records of cohort members registered at the island’s only hospital (n = 873) were used to identify those who had received care for myocardial infarction, stroke, or unstable angina between 1997 and 2005 (n = 70). Logistic regression analyses were used to establish whether exposure to the siege, birth weight, and age were associated with the development of CVD.

Results: Compared with those who were evacuated before the occupation began and those resident elsewhere, cohort members who remained on Guernsey during the occupation had more than twice the odds of experiencing CVD in later life (odds ratio (OR) = 2.05 (95% confidence interval, 1.24 to 3.40)). This association remained statistically significant after adjusting for potential confounders (paternal class or urban/rural residency). Birth weight had no impact on subsequent CVD (OR per 1 lb = 1.04 (0.79 to 1.37) after adjustment for sex and gestational age), and there was no interaction between birth weight and exposure to the siege. However, the association between exposure to the siege and CVD was no longer statistically significant after adjustment for age at exposure (OR = 1.53 (0.89 to 2.63)), because the impact differed between cohort members exposed during childhood (aged 3 to 10 in 1940, OR = 2.59 (1.00 to 6.70)) and those exposed during adolescence (aged 11 to 17 in 1940, OR = 1.42 (0.77 to 2.62)).

Conclusions: These findings support the view that conditions in early life can influence subsequent cardiovascular health, and that some periods of development are more sensitive than others.

08 THE ASSOCIATION BETWEEN LUNG FUNCTION, SMOKING, AND MORTALITY IN THE WEST OF SCOTLAND TWENTY-07 STUDY: WHAT DOES IT CONTRIBUTE TO OUR UNDERSTANDING OF THE SOCIAL PATTERNING OF HEALTH OVER THE LIFE COURSE?

M. Benzeval1, G. Watt2, S. Macintyre1. 1MRC Social and Public Health Sciences Unit, Glasgow; 2Department of General Practice, University of Glasgow, Glasgow, UK

Background: Studies have found an association between impaired lung function and all-cause and cause-specific mortality. This paper aims to add to those reports by comparing the association in two different age cohorts and considering its role in the social patterning of health over the lifecourse.

Objective: To investigate the relation between socioeconomic status, smoking, lung function, and subsequent mortality in two age cohorts, controlling for a range of confounders.

Design and setting: Prospective study of 3036 men and women living in Clydeside, born in the 1950s and 1930s and aged 35 and 55 respectively at baseline in 1987/8, interviewed at five year intervals, and flagged for mortality records with the NHS Central Register (NHSCR). Preliminary analyses have been undertaken on deaths recorded until 2005 (546 respondents); these analyses will be repeated with further death records for the presentation.

Results: At baseline there was a significant linear trend in lung function (FEV1 standardised by height²) across head of household social class, with those in manual classes having poorer lung function than those in non-manual classes, for both men and women in each of the two age cohorts. The association was stronger for women and in the older cohort. Cox’s proportional hazards regression was used to assess the association between baseline standardised FEV1 and all-cause mortality, controlling for sex, cohort, and baseline social class. People in manual households were 1.5 times more likely to die by 2005 than those in non-manual households, while people whose baseline lung function was in the bottom 20% of the distribution were 2.2 times more likely to die than those in the top quintile. For the 1930s cohort the effect of lung function on mortality was stronger than that of social class, while for the 1950s cohort class appeared to be a stronger predictor of premature death. Adding baseline smoking status to the model for both cohorts slightly reduced the hazard ratios for FEV1 and social class but did not change their significance. Respondents who were current smokers at baseline were 2.5 times more likely to die by 2005, and ex-smokers 1.7 times more likely, than never smokers.

Conclusions: Lung function, social class, and smoking were all associated with subsequent mortality, but the associations varied by cohort. Further analyses will be presented that examine the sensitivity of the findings to alternative ways of measuring and standardising lung function and other measures of socioeconomic status.

Qualitative methods

09 SHARED CLINICAL DECISION MAKING IN RHEUMATOLOGY: WHAT DO THE PATIENTS THINK?

A. A. M. Nicol1, R. D. Smith1, J. R. Smith1, K. S. Mills2, M. Adams1, S. Reynolds1. 1School of Medicine, Health Policy and Practice, University of East Anglia, Norwich; 2Department of Rheumatology, Norfolk and Norwich University Hospital, Norwich, UK

Background: Anti-TNF treatment is a relatively new treatment for severe rheumatoid arthritis (RA) which has been found to be highly effective in improving symptoms and reducing disease activity in patients who have not responded to standard drug treatments. The different kinds of anti-TNF treatment currently available are considered medically equivalent and differ only in route of administration. Therefore, RA patients at the Norfolk and Norwich University Hospital are given a choice of which anti-TNF treatment they want to start.

Objectives: To examine the experiences of, and preferences towards, shared decision making of RA patients starting anti-TNF treatment.

Design: A qualitative study with semistructured interviews within the context of a large observational study of the clinical, economic, and psychosocial impact of anti-TNF treatment in clinical practice.

Participants: 21 patients with RA starting on anti-TNF treatment.

Methods: During one or more medical consultations, RA patients chose which of the three available anti-TNF treatments they wanted to start. Of the 61 consenting patients, 44 chose one of the two self administered injections available, and 17 chose the hospital administered infusion. Following their choice of treatment, the patients took part in semistructured interviews about their choice of drug and preferences towards involvement in clinical decision making. A purposive subsample of 21 interviews was then selected on the dimensions of drug choice, sex, and age for a more in-depth qualitative analysis.

Results: The transcribed interviews are being analysed using interpretative phenomenological analysis, which is a qualitative approach within psychology aiming to explore in detail how people view their experiences. It is expected that the analysis will give further insight into the role of shared decision making in clinical practice, as well as information on what factors influence patients’ choices of treatment. Preliminary analysis indicates first, that patients prefer to be advised rather than have a free choice of treatment; and second, that an important factor influencing choice is the ease of incorporating the treatment into existing daily routines.

Conclusions: Knowing what factors patients regard as important in treatment decision making will help health care professionals to engage more successfully in shared clinical decision making.

10 USING A LIFEGRID FOR QUALITATIVE INTERVIEWING IN HEALTH

J. C. Richardson, B. N. Ong, J. Sim, M. Corbett. Primary Care Sciences Research Centre, Keele University, Keele, Staffordshire, UK

Introduction: A lifegrid is a form of solicited account, providing one way to explore the experience of illness within the different realms of a person’s life across the lifecourse. Previously used primarily in quantitative research, a lifegrid is, in practical terms, a chart containing: (1) rows showing the years of a participant’s life; (2) significant external events by year; and (3) columns showing different areas of the participant’s life. The chart is completed jointly by the participant and the interviewer during the course of an interview.

Methods: The data presented in this paper are part of a project exploring the lives of people living with chronic widespread pain, using a range of qualitative methods, including lifegrid interviews, follow up interviews, diaries, and diary interviews. The paper also draws on two other projects which used lifegrids, and focuses on how the lifegrid was used in these contexts in order to evaluate its utility for in-depth interviewing.

Results: The lifegrid enables participants to structure the story of their life and their illness. It facilitates recall in the area of interest, although participants may use fixed events within their own lives, rather than external events, to reference and locate other life events. From a participant’s point of view, the mutual reconstruction of the lifecourse offers some control over the structure and pace of the interview and over when potentially sensitive issues are introduced. From a researcher’s point of view, the interviewer is able to see how life events are clustered, assisting in the framing of further questions and acting as a discussion point for subsequent interviews.

Conclusions: The lifegrid is one of a number of approaches to in-depth interviewing about experiences of health and illness, and as such has advantages and disadvantages. It may be more useful in some contexts than others and needs adapting to suit the participants who will use it.

11 EVALUATION OF DIABETES SCREENING PILOTS IN PRIMARY CARE IN ENGLAND

E. Goyder1, J. Carlisle1, A. Lacey1, J. Peters1, J. Lawton2, S. Wild2, C. Fischbacher2. 1ScHARR, University of Sheffield, Sheffield; 2University of Edinburgh, Edinburgh, UK

Background: Between September 2003 and September 2005, 24 general practices in eight different English regions were funded by the Department of Health to pilot screening for diabetes. The practices were volunteers from eight teaching Primary Care Trusts (PCTs), selected for their high levels of socioeconomic deprivation and varied ethnic minority populations. Recommended criteria for screening were age over 40 and body mass index over 25 kg/m2. The threshold for diagnostic testing was a capillary blood glucose of 6.0 mmol/l or above.

Aims: To establish how screening for diabetes in deprived high risk populations could be implemented in general practice and to explore the issues that influenced feasibility and acceptability.

Methods: Practices provided information on the number of patients invited for screening, numbers screened, and numbers of cases of diabetes detected. Twelve practices used paper forms to record details of screening activity, and 12 used computer based clinical information systems. Interviews were conducted with facilitators in all PCTs and case studies—involving in-depth interviews with practice staff and patients—were conducted in five practices. Twenty practices provided patient level data on screened individuals.

Results: Approximately 41 400 individuals were invited for screening, 25 356 were screened, and 358 new cases of diabetes were detected. In 19 practices, health care assistants without previous experience were trained to provide screening and, by the conclusion of screening, were considered by primary care teams to be the most appropriate staff for this activity. An oral glucose tolerance test (a fasting test and a second test two hours after a glucose drink) was used as a diagnostic test by 13 practices; the other practices used only a fasting test. A major barrier to screening, and to its evaluation, was the inadequacy of clinical information systems for easily identifying “high risk” individuals for screening or retrieving the results of screening tests or subsequent diagnostic tests. For one third of the individuals for whom a positive screening test result was available, no diagnostic test result was provided in audit data, whether data were derived from paper forms or from searches of clinical systems.
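From the rounded counts above, uptake and yield follow directly (a rough calculation, since the number invited is approximate):

```python
invited, screened, cases = 41_400, 25_356, 358   # counts reported above (invited is approximate)

print(f"Uptake: {screened / invited:.0%}")   # ~61% of invited patients attended screening
print(f"Yield:  {cases / screened:.1%}")     # ~1.4% of screened patients had diabetes
```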

Conclusions: These findings have major implications for the implementation of diabetes screening nationally and, more generally, for current plans to implement cardiovascular risk assessment in primary care. Despite great enthusiasm for training staff to provide risk assessment, screening, and health promotion, current clinical data collection proved inadequate to support this or allow adequate audit of the subsequent activity.

12 DECISION MAKING ABOUT MODE OF DELIVERY IN PREGNANT WOMEN WHO HAVE PREVIOUSLY HAD A CAESAREAN SECTION: A QUALITATIVE STUDY

J. Bell, M. Porter, M. Moffat, S. Lawton, V. Hundley, P. Danielian, S. Bhattacharya. IMMPACT, University of Aberdeen, 2nd Floor, Foresterhill Lea House, Foresterhill Campus, Westburn Road, Aberdeen, UK

Objective: To explore prospectively women’s decision making regarding mode of delivery after a previous caesarean section (CS).

Design: Qualitative study using diaries, observations, and tape recorded interviews. Data were analysed thematically.

Setting: The antenatal unit in a large tertiary hospital in Scotland and the participants’ homes.

Participants: 26 women who had previously had a CS for a non-recurrent cause.

Results: The data on decision making were analysed from both a temporal and a cross sectional perspective. Women were influenced by their own previous experiences and expectations, and the final decision on mode of delivery often developed during the course of the pregnancy. Initially undecided or uncertain of their ability to deliver normally, many women felt they were being encouraged to have a trial of labour by medical and nursing staff. During the course of the pregnancy, most acknowledged that any decision was provisional and might change because of medical and social circumstances. Despite a universal desire to be involved in the process, many women did not participate actively and were uncomfortable with the responsibility of decision making. Feelings about the amount and quality of information received regarding delivery options varied greatly, with many women wishing for information tailored to their individual clinical circumstances and needs. Other factors that influenced decision making were the wish for a “normal” delivery, practical issues, and the opinions of other key people involved in the women’s lives. In contrast to the impression created in the media, there was no evidence of clear preferences or strong demands for elective CS.

Conclusions: Women who have had a previous CS do not usually have firm ideas about mode of delivery. They look for targeted information and guidance from medical personnel based on their individual circumstances, and some are unhappy with the added responsibility of deciding how to deliver in the current pregnancy.

13 AN ETHNOGRAPHIC APPROACH TO EVALUATION OF USER INVOLVEMENT IN STROKE SERVICE DEVELOPMENT

N. Fudge, C. D. A. Wolfe, C. McKevitt. Division of Health and Social Care Research, King’s College London, London, UK

Objective: UK health service providers are obliged to involve users in the development and delivery of health services. However, there is scant evidence of the impact and influence of service user involvement on service outcomes. We evaluated the involvement of people with stroke in a programme of stroke service improvement undertaken in southeast London. The evaluation aimed to: identify the barriers to and facilitators of involvement; identify how people with stroke influence the development and delivery of stroke services; and understand what constitutes involvement from the perspective of service users and health care professionals.

Methods: We used an ethnographic approach including participant observation, content analysis of programme literature (meeting minutes, agendas, newsletters), and in-depth interviews (n = 20) with programme participants and those who declined to be involved.

Participants: People with stroke and their relatives living in south east London who volunteered to take part in the programme, and NHS managers, clinicians, and voluntary sector organisations working within the programme for the local NHS trusts.

Results: People with stroke and their families were recruited to the programme through a community stroke register, clinical services, and community organisations; 80 people responded to invitations to participate and, at 9 months, 41 remained active in the programme. People with stroke and their families were involved in overseeing the programme as members of the management team, improving stroke information provision, training stroke professionals, and providing peer support. Factors that facilitated involvement were encouragement from trusted contacts, introductory events, feeling listened to, and the sociable aspect of being involved. Barriers to involvement were transport, inflexibility of health service organisation, illness, caring commitments, and communication. Service users’ motivations for taking part included improving themselves after the stroke, helping other people with stroke, and meeting other people. Only a few users saw the purpose of their involvement as a means of influencing stroke service development. Observation of professionals involving service users revealed obstacles within their organisations: inflexibility of resources; the priority accorded to user involvement in practice; the demands of combining user involvement with target driven projects; and the attitudes of other professionals working within the programme towards user involvement.

Conclusions: There is a policy drive to integrate user involvement into the development of healthcare and services. This evaluation highlights economic, social, and cultural factors that need to be addressed before this can be achieved.

Primary care

14 HOW ADEQUATE ARE OVERALL MEASURES OF PATIENT SATISFACTION? EVIDENCE FROM PATIENTS EXPERIENCING NEW MODELS OF OUT-OF-HOURS CARE

A. Burgess1, V. Lattimer1, H. Smith3, K. Gerard1, J. Turnbull1, H. Surridge1, J. Lathlean1, S. George2, M. Mullee2. 1School of Nursing and Midwifery, University of Southampton; 2Public Health Sciences and Medical Statistics Group, University of Southampton; 3Brighton and Sussex Medical School, University of Brighton, UK

Objective: To compare the satisfaction and experience of patients contacting a new integrated model of out-of-hours service with that of patients contacting out-of-hours services providing a standard model of care.

Design: A two way comparison of patient satisfaction and experience in different models of care: (1) between patients contacting GP cooperatives integrated with NHS Direct nurse triage (integrated model of care, “IM”) and GP cooperatives employing their own nurse telephone triage service (nurse-triage model, “NT”); and (2) between IM and GP cooperatives providing doctor telephone triage only (doctor-triage model, “DT”). Both comparisons used the validated Short Questionnaire of Out-of-Hours Care (SQOC) embedded within a questionnaire survey.

Setting: Six GP cooperatives in England: two in each of the comparison groups.

Subjects: 3930 patients contacting one of the six out-of-hours services during the study period.

Main outcome measures: (1) Overall SQOC satisfaction score and the proportions of patients who were “very satisfied” with each of seven attributes of the scale; (2) comparison of the standard of care that patients reported receiving with national quality requirements for out-of-hours care; (3) the proportion of patients whose stated expectations about call disposition were subsequently met.

Results: Preliminary results show that SQOC satisfaction scores were high, with no significant difference between the three groups (IM: mean (SD), 81.3 (23.0)%; NT: 81.5 (23.9)%; DT: 83.7 (20.3)%; p = 0.189). However, attribute scores revealed that patients receiving IM were less satisfied with “getting through on the phone” than those receiving DT (p<0.001) or NT (p = 0.022), and were also less satisfied with “waiting to speak to a health professional” than those receiving DT (p = 0.009). Patients receiving IM were less likely to make contact in a single telephone call, more likely to receive a “call-back”, and more likely to wait longer at certain points in the pathway. Higher satisfaction was associated with having expectations met (83.6 (21.1)% v 79.5 (24.5)%, p = 0.001). Fewer patients receiving IM had their expectations met (n = 547: 51.6%) than did those receiving NT (n = 464: 65.8%) or DT (n = 463: 70.7%) (χ² = 47.03, p<0.001).
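For readers wanting to see the mechanics, the χ² comparison of expectation-met proportions can be approximately reconstructed from the figures above. The back-calculated counts below read each n as the group size (an assumption) and use the rounded percentages, so the statistic will only approximate the reported 47.03:

```python
from scipy.stats import chi2_contingency

# Counts back-calculated from rounded percentages, assuming n = group size
groups = {"IM": (547, 0.516), "NT": (464, 0.658), "DT": (463, 0.707)}
table = [[round(n * p), n - round(n * p)] for n, p in groups.values()]  # met / not met

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f} (dof = {dof}), p = {p:.1e}")
```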

Conclusions: Overall SQOC scores did not suggest significant differences between the different models of out-of-hours care but closer analysis showed important differences in satisfaction in relation to service attributes and quality standards. Services providing the integrated model had recently undergone substantial change. Such changes are infrequently communicated to patients and lower satisfaction may have been the product of dissonance between patient expectation (based on past experience) and the new arrangements offered by the provider.

15 GENERAL PRACTICE PERFORMANCE IN THE FIRST YEAR OF THE UK’S NEW “PAY FOR PERFORMANCE” SCHEME: GOOD CLINICAL PRACTICE OR GAMING?

T. Doran, C. Fullwood, D. Reeves, H. Gravelle, M. Roland. National Primary Care Research and Development Centre, Williamson Building, University of Manchester, Manchester, UK

Objectives: To determine whether the socioeconomic, demographic, and health characteristics of practice populations, and the characteristics of practices themselves, affect the quality of clinical care achieved under the new Quality and Outcomes Framework (QOF). We also assessed the use of exception reporting by practices and its effect on achievement of the clinical quality targets.

Design: Multiple regression analysis of QOF data for general practices in the first year of the new contract (April 2004 to March 2005), data from the UK census, and data on characteristics of individual family practices.

Participants: 8576 general practices in England.

Main outcome measures: Reported achievement on the clinical QOF indicators, population achievement (the proportion of patients meeting quality indicators for all patients registered with the condition, including those exception reported), and the extent to which practices achieved high scores by excluding patients from target calculations (exception reporting).

Results: Median reported achievement in the first year of the new contract was 88.6%. However, practices exception reported a median of 6.0% of patients, and median population achievement was therefore lower, at 82.9%. Marginally higher achievement was associated with practices serving populations with fewer income deprived people, fewer people aged over 65, and more people with good self rated health. The strongest predictor of achievement was the propensity of practices to exception report patients: a 1% increase in exception reporting was associated with 0.3% higher overall achievement. Higher rates of exception reporting were associated with practices serving populations with more income deprived people, lone parent households, and people aged over 65. Exception reporting was high in a small number of practices: 1% excluded >15% of patients.
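The gap between reported and population achievement is easiest to see in a worked example for a single hypothetical practice: exception reported patients are removed from the denominator of the first measure but not the second.

```python
# A hypothetical practice: how exception reporting separates the two measures
registered     = 100   # patients on the disease register
excepted       = 6     # exception reported (removed from the QOF denominator)
meeting_target = 83    # patients who met the quality indicator

reported   = meeting_target / (registered - excepted)   # what the practice scores
population = meeting_target / registered                # whole-population achievement
print(f"Reported: {reported:.1%}   Population: {population:.1%}")   # 88.3% v 83.0%
```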

Conclusions: Against a background of substantial quality improvement in recent years, UK family practices attained high levels of achievement in the first year of the new contract. Some socioeconomic and demographic characteristics of practice populations were associated with higher rates of achievement. Practices could increase scores by excluding patients for whom, in their judgement, a quality indicator was not clinically appropriate (exception reporting). Further investigation of the few practices with high exception reporting rates is needed to determine whether exception reporting was used for sound clinical reasons or in order to increase income.

16 PRIMARY CARE QUALITY SCORES AND THEIR RELATION TO HOSPITAL ADMISSIONS

G. Rudge1, A. Downing2, Y. K. Tu2, M. S. Gilthorpe2, R. C. Wilson3, J. Keen4. 1Department of Public Health and Epidemiology, University of Birmingham; 2Centre for Epidemiology and Biostatistics, University of Leeds; 3Public Health, South Birmingham PCT; 4Centre for Health and Social Care, University of Leeds, Leeds, UK

Background: There are methodological difficulties in determining the strength of association between the way healthcare processes are carried out (or whether they are carried out at all) and patient outcomes. Furthermore, it is difficult to separate these effects from the broader matrix of variables that also affect health, for example socioeconomic deprivation. The aim of this study was to explore this issue with reference to primary care processes, as measured by the new Quality and Outcomes Framework (QOF), and to assess their relation to hospital admissions. This was undertaken using multilevel modelling to account for the complex data structures involved.

Data and methods: Information on admissions between April 2003 and March 2004 was extracted from the Hospital Episode Statistics database for two neighbouring primary care trusts (PCTs) for a range of disease groupings, along with patient sociodemographics. The Index of Multiple Deprivation 2004 score was assigned to each patient’s super output area (SOA) of residence. Through the NHS number, the admission data were linked to practice QOF scores for the clinical, organisational, and additional services domains. Cross classified multilevel logistic regression models were explored separately for elective and emergency admissions.

Results: Females had substantially fewer admissions for coronary heart disease (CHD) and cancer. Age was positively associated with all admissions except emergency asthma admissions. Increasing deprivation was associated with increasing likelihood of admission for all conditions. The additional services domain was modestly associated with increased admissions, except for asthma emergencies. The clinical domain was associated with reduced emergency admissions for cancer, stroke, CHD, and chronic obstructive pulmonary disease. The organisational domain was not generally associated with patient care outcomes. There was less unexplained outcome variation among GP practices than across SOAs. Even where the effect of deprivation was significant, there often remained a significant proportion of unexplained variation across SOAs.

Conclusions: While there were several associations between QOF scores and patient health outcome measures, after adjusting for socioeconomic deprivation, these were not consistent and it generally appeared that deprivation was the more important covariate. These results suggest that there are potentially unmodelled factors pertaining to health and healthcare operating at the area level. While ensuring good quality and responsive primary care for all is an imperative, other policies such as area based initiatives, economic development, and environmental renewal might offer better long term solutions to improving population health.

17 IMPACT OF CONTRACTUAL FINANCIAL INCENTIVES ON THE ASCERTAINMENT AND MANAGEMENT OF SMOKING IN PRIMARY CARE

T. Coleman1, S. Lewis2, R. Hubbard2, C. Smith2. University of Nottingham, Divisions of 1Primary Care; 2Epidemiology and Public Health, Nottingham, UK

Background: The management of smoking cessation in primary care is currently suboptimal. Between 1993 and 1996 general practitioners earned payments for recording patients’ cardiovascular risk factors (including smoking). In 2004, a new contract for UK general practitioners included payments for recording smoking status and GPs’ brief smoking cessation advice, but none for prescribing nicotine addiction treatments (nicotine replacement therapy (NRT) or bupropion).

Objectives: To determine the impact of the 1993–96 payments and 2004 GP contract on delivery of effective smoking cessation interventions in primary care.

Design: Longitudinal analysis of data from The Health Improvement Network (THIN) database, collected between 1990 and 2005.

Setting: THIN is a database which contains routine general practice electronic medical records data.

Subjects: Primary care medical patients contributing to THIN.

Interventions: None; this was an observational study investigating trends in routine primary care data.

Main outcome measures: For all THIN patients, trends in recording of smoking status. For those recorded as smokers, trends in receipt of (1) GPs’ brief smoking cessation advice and (2) prescriptions for nicotine addiction treatments.

Results: Recording of smoking status increased temporarily around 1993–94 and then rose gradually from the year 2000. This rise was more marked from 2003, with an 88% increase between the first quarters of 2003 and 2004, when general practices were preparing for the new GP contract. The first quarter of 2004 fell just before the introduction of the contract, and the higher rates of recording smoking status were sustained for the subsequent year. In smokers, there was a broadly similar pattern for the proportion recorded as having received brief cessation advice. However, while there was a sharp increase in prescriptions for nicotine addiction treatments after their release on prescription in the UK (2000/2001), no comparable acceleration in this trend from 2003 was apparent.

Conclusions: Two general practitioner payment systems targeted at specific aspects of clinical care, including smoking cessation management, increased the recording of smoking status in primary care and the amount of smoking cessation advice that GPs recorded giving. Improvements attributable to the first payment scheme disappeared after it was withdrawn, and no impact of the 2004 GP contract on prescribing of nicotine addiction treatments was documented.

18 DOES DISTANCE MATTER? GEOGRAPHICAL VARIATION IN THE USE OF GENERAL PRACTICE OUT-OF-HOURS SERVICES

J. Turnbull1, V. Lattimer1, C. Pope1, D. Martin2. 1Health Services Research Group, School of Nursing and Midwifery, University of Southampton; 2Department of Geography, University of Southampton, Southampton, UK

Background: Out-of-hours services provide urgent medical care when the GP surgery is closed. An inverse relation between distance from health care services and use has been noted in a variety of health care settings and in different countries. Provision has become increasingly centralised, leading to concerns that rural populations may experience poorer access.

Objective: To examine the effect of distance and rurality on service use and initial triage decision.

Design: Data analysis of all calls made to a general practice out-of-hours cooperative in June and December 2003.

Setting: A countywide cooperative in the south west of England, serving almost one million patients (82% of the county population) and covering urban, rural, and remote areas. Approximately 20% of the population live in rural areas (villages, hamlets, or isolated dwellings); 5% of these areas were defined as “sparse” using the ONS Rural and Urban Area Classification 2004.

Main outcome measures: Patterns of out-of-hours activity including date/time of call, initial triage decision (home visit, primary care centre (PCC) attendance or telephone advice), age/sex specific call rates, straight line distance (from patient’s home to nearest PCC), and area classification of residence and service use.

Results: 34 229 calls were received by the GP cooperative. Call rates varied widely by Primary Care Trust (PCT) area, from 140 calls per 1000 patients/year to 216 calls per 1000 patients/year, with significantly higher call volumes from urban PCTs. The proportion of patients seen at a PCC ranged from 25% (in more rural PCTs) to 33% (urban PCT), although an urban–rural variation in advice calls or home visits was not observed. Call rates also varied by ONS Rural and Urban Area Classification 2004 categories, with significantly lower call rates in the village, hamlet, and isolated dwellings area type: 133 (95% confidence interval, 130 to 137) calls per 1000 patients/year v 202 (196 to 208) calls per 1000 patients/year in town and fringe areas, and 210 (207 to 213) calls per 1000 patients/year in urban areas. Further analyses indicate variation in rates with distance to PCC.
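One standard way to attach 95% confidence intervals to rates like those above is a Poisson treatment of the call counts. A minimal sketch with hypothetical numbers (the abstract reports only the rates), annualising the two study months:

```python
from math import sqrt

# Hypothetical counts: 5300 calls from 200 000 patients over the two study months
calls, patients = 5300, 200_000
person_years = patients * (2 / 12)        # June + December = two months of observation

rate = calls / person_years * 1000        # calls per 1000 patients/year
se   = sqrt(calls) / person_years * 1000  # Poisson SE on the same scale
print(f"{rate:.0f} ({rate - 1.96*se:.0f} to {rate + 1.96*se:.0f}) per 1000 patients/year")
```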

Conclusion: There appears to be geographical variation in the use of out-of-hours services, suggesting that distance does matter, whether because of inequities in service provision or perhaps sociocultural differences in patients’ help seeking behaviour (for example, a rural culture of self reliance).

Child and adolescent health

19 POSITIVE TWO YEAR FOLLOW UP FINDINGS FROM A STOP SMOKING IN SCHOOLS TRIAL (ASSIST), THE LARGE SCALE CLUSTER RANDOMISED TRIAL OF A PEER LED, SCHOOL BASED ANTI-SMOKING INTERVENTION

R. Campbell1, F. Starkey1, J. Holliday2, R. Hughes1, S. Audrey1, N. Parry-Langdon3, M. Bloor4, L. Moore2. 1Department of Social Medicine, University of Bristol; 2Cardiff Institute of Society, Health and Ethics, Cardiff University; 3Public Health Improvement Division, Welsh Assembly Government; 4Faculty of Social Sciences, University of Glasgow, Glasgow, UK

Objective: To assess the effectiveness of a peer led, school based anti-smoking intervention (ASSIST).

Design: The intervention was evaluated using a cluster randomised controlled trial design with secondary school as the unit of randomisation. Embedded within the trial were an economic evaluation of the intervention costs, a process evaluation to provide detailed information on how the intervention was delivered and received, and an analysis of social networks to consider whether such a peer group intervention could work with this age group.

Setting: 59 secondary schools in the West of England and south east Wales.

Participants: 10 730 year 8 students (aged 12 to 13) at baseline data collection.

Intervention: A questionnaire was used to identify the most influential year 8 students, who then received two days’ training outside school, followed by four in-school sessions over the subsequent 10 weeks. During this period, the trained students were asked to intervene with their peers in everyday situations to encourage them not to smoke.

Evaluation methods: Students in intervention and control schools completed a baseline smoking questionnaire and provided a saliva sample. Follow up questionnaires were administered immediately post-intervention and at a one year and a two year follow up.

Results: Response rates were high, with over 85% of eligible students providing both self report and cotinine data throughout the trial. All data from the three follow up periods were analysed using three level multilevel modelling implemented in MLwiN. This analysis produced an odds ratio of regular (weekly) smoking in intervention v control schools of 0.77 (95% confidence interval, 0.62 to 0.94) (p = 0.012). This finding cannot be explained by a significant impact upon peer supporters only. Comparing peer supporters in intervention schools with control school students who would have been invited to become peer supporters had they been in an intervention school, the estimate for the interaction term (0.90 (0.69 to 1.19), p = 0.466) suggests that the intervention was only slightly more effective among those who were invited to be peer supporters, and there was no statistical evidence to support a differing effect.
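The trial's analysis was a three level multilevel logistic model fitted in MLwiN. As a rough Python analogue only (a simpler, population-averaged approximation with hypothetical file and column names, not the trial's analysis), one could fit a logistic GEE clustering students within schools:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical student-level data; column names are assumptions:
# smoker (0/1 weekly smoking), arm (1 = intervention school), school (cluster id)
df = pd.read_csv("assist_followup.csv")

# Population-averaged logistic model with an exchangeable working correlation
# within schools -- a simpler stand-in for the trial's three-level MLwiN model
gee = smf.gee("smoker ~ arm", groups="school", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(gee.params["arm"]))   # odds ratio of weekly smoking, intervention v control
```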

Conclusions: The intervention had a statistically significant impact on adolescent smoking at two year follow up, of a magnitude of public health importance. Due to the very promising results obtained in this trial, the Welsh Assembly Government has decided to “roll out” the ASSIST intervention across Wales from September 2006, and evaluation of this implementation (as a Phase IV study, defined in the MRC Framework for the Evaluation of Complex Public Health Interventions) is currently being planned.

20 IDENTIFYING INFLUENTIAL YOUNG PEOPLE FOR HEALTH PROMOTION PEER EDUCATION: THE CASE STUDY OF A STOP SMOKING IN SCHOOLS TRIAL, ENGLAND AND WALES, 2001–2005

F. Starkey1, J. Holliday2, S. Audrey1, L. Moore2, R. Campbell1. 1Department of Social Medicine, University of Bristol; 2Cardiff Institute of Society, Health and Ethics, Cardiff University, Cardiff, UK

Objective: To develop an effective process for identifying influential young people to participate in a peer led smoking prevention intervention.

Design: The peer identification process was the first stage of an intervention that was evaluated via an MRC funded cluster randomised controlled trial (ASSIST).

Setting: 59 secondary schools in the West of England and south east Wales.

Participants: 10 730 year 8 students (aged 12 to 13) at baseline data collection. The peer identification process resulted in 876 students being invited to train to be “peer supporters” for the intervention.

Methods: The intervention was based on “diffusion of innovation” theory, which suggests that changes in social and behavioural norms can be effected by influential opinion formers. A “whole year group” approach for identifying influential students to work as peer supporters was developed through extensive piloting work with young people. After two days’ training, these influential young people were asked to have informal conversations over a 10 week period with other year 8 students to encourage them not to smoke.

Results: The whole year group identification process successfully recruited both male and female peer supporters, with a range of backgrounds, aspirations, and experiences. It was feasible to engage and retain such a diverse group of young people as peer supporters. Attrition rates were low, with 87% (816 of 942) of those originally invited to be peer supporters being trained and continuing in the role. One year follow up results indicate that these peer supporters had a positive impact on levels of regular smoking among the trial’s target group of experimenters, occasional smokers, and ex-smokers (OR = 0.75 (95% confidence interval, 0.56 to 0.99)), and preliminary two year follow up data showed little attenuation of this effect.

Conclusions: The ASSIST peer nomination process was developed and refined using young people’s own terms of reference and feedback, rather than adult defined criteria. The diversity of the group of peer supporters identified by the nomination process is likely to have enhanced the intervention’s credibility, a hypothesis borne out by the statistically significant results obtained at one year follow up. Given the increasing popularity of school based peer education as a health promotion approach, yet the scarcity of sensitively developed peer educator recruitment processes in published reports, the experience of ASSIST can help to inform the development of other peer education approaches aiming to have a positive impact on health related behaviours.

21 CLARITY, COMMITMENT, AND COMPATIBILITY: TEACHER PERSPECTIVES ON THE IMPLEMENTATION OF A SCHOOL BASED HEALTH PROMOTION PROGRAMME

S. Audrey1, J. Holliday2, R. Campbell1.1Department of Social Medicine, University of Bristol; 2Cardiff Institute of Society Health and Ethics, School of Social Sciences, Cardiff University, Cardiff, UK

Design: Process evaluation within ASSIST (A Stop Smoking In Schools Trial) to evaluate a peer led, school based smoking intervention in which influential year 8 students were trained by external health promotion trainers to reduce smoking uptake among their peers through informal diffusion of the health promotion message.

Setting: Secondary schools in south east Wales and the West of England.

Participants: Year 8 teachers and their students.

Methods: Self completion questionnaires were used in all intervention schools at each stage of the intervention (peer nomination, peer supporter recruitment, and training). Baseline and post-intervention semistructured interviews were carried out with key staff. All interviews were taped and fully transcribed, and free text responses from self completed questionnaires were reproduced in full on summary sheets to facilitate comparison. Patterns of experiences and opinions were identified, and further analysis led to the development of core themes.

Results: Outcome data indicate that the risk of students who were occasional or experimental smokers at baseline going on to report weekly smoking at one year follow up was 18.3% lower in intervention schools. There was a high level of interest among schools approached to take part in the trial; 59 schools were recruited (10 730 students at baseline) and no school withdrew from the programme. Training and follow up sessions were successfully conducted in all intervention schools; 90% of intervention schools (27 of 30) returned questionnaires concerning teacher perceptions of the peer supporter recruitment, and 83% (25 of 30) returned questionnaires about the training. Researchers experienced no difficulties in interviewing staff at selected intervention schools at baseline and post-intervention. Recruitment and retention rates of peer supporters were high, with 85% of those originally nominated (835 of 978) attending the training event and agreeing to continue in the role. When back at school, peer supporter attendance at follow up meetings did not fall below 86% (718 of 835), and 82% (687 of 835) handed in a simple diary of their activities.

Conclusions: Schools were generally willing to support and accommodate the programme, and to cooperate with the process evaluation. Several important factors affected the implementation of the programme: commitment and active support of school staff; clarity of aims and procedures; and compatibility with school values and context. We acknowledge areas of tension but conclude that the ASSIST model offers an effective and practicable approach to school based health promotion.

22 CEREBRAL PALSY AND CONGENITAL ANOMALIES HAVE THE SAME AETIOPATHOGENESIS

P. O. D. Pharoah.Department of Public Health, Muspratt Building, University of Liverpool, Liverpool, UK

Background: In 70% of cases of cerebral palsy (CP), the cerebral impairment occurs prepartum. The proposed pathogenic mechanism is ischaemic damage to the fetal brain associated with monochorionic placentation in a twin conception. On this hypothesis, CP in a singleton infant is attributable to the very early loss of one conceptus, a “vanishing” twin. For most congenital anomalies the cause is unknown, and it is hypothesised that many have the same aetiopathogenesis as CP. If so, children with CP should be at increased risk of many congenital anomalies.

Objective: To compare the prevalence of major congenital anomalies in children with CP with that in all children.

Methods: The Mersey CP register records all cases born since 1966 in the counties of Merseyside and Cheshire. Hospital and community paediatric records were abstracted for coincident congenital anomalies. The International Classification of Diseases, 10th edition (ICD-10) was used to code the anomalies. For comparison, the Office for National Statistics provided data for notified congenital anomalies in 2000–2004 and data from the eight UK congenital anomaly registers contributing to EUROCAT were obtained from the EUROCAT website. Published data from the population based Baltimore–Washington Study of congenital heart disease also were used.

Results: There were five- to 33-fold increased population prevalence ratios for the congenital cardiac anomalies (Q20: Congenital malformations of the cardiac chambers and connections; Q22 and Q23: Congenital malformations of the pulmonary, tricuspid, aortic and mitral valves; Q25 and Q26: Congenital malformations of the great arteries and veins). The population prevalence ratio for congenital anomalies of the eye (Q11–Q14) was 8.8, for cleft lip and palate (Q35–Q37), 4.0, and for congenital malformations of the oesophagus, small and large intestine (Q39, Q41 and Q42), 6.4. Microcephaly (Q02) and hydrocephalus (Q03), diagnosed antenatally or at birth, had population prevalence ratios greater than 50. There were also significant increases in the prevalence of congenital deformities of the hips and feet.

Conclusions: Compared with all births, children with CP have about a 10-fold increase in the prevalence of major congenital anomalies. Anomalies such as congenital dislocation of the hip and talipes are attributable to the muscular laxity of intrauterine CP. Microcephaly and hydrocephalus are secondary to cerebral damage that produces CP. This supports the hypothesis that the damage is caused by acute episodes of feto-fetal transfusion in a monochorionic twin gestation. The congenital cardiac valve, eye, and intestinal anomalies also conform to this pathogenic mechanism.

23 CHILDREN’S USE OF BICYCLE HELMETS: THE EFFECTS OF SOCIOECONOMIC AND ATTITUDINAL FACTORS

I. Lang1, S. Gibbs2.1Epidemiology and Policy Group, Peninsula Medical School, Exeter; 2Exeter Primary Care Trust, Exeter, UK

Objective: To assess the association of socioeconomic and attitudinal factors with children’s reported use of cycle helmets.

Design: Pooled population based repeated cross sectional study.

Setting: National population sample in England.

Participants: Children aged 8 to 12 who participated in the Health Survey for England for each of the years 1997 to 2002 (n = 8525).

Main outcome measures: Self reported wearing of a bicycle helmet most of the time and all of the time. Analyses were conducted using demographic and residential explanatory variables, demographic plus socioeconomic variables (social class, household income), demographic plus attitudinal variables (attitudes to bicycle safety and health behaviour), and all variables.
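
As a rough illustration of this block-wise modelling strategy, the sketch below fits nested logistic models and compares McFadden's pseudo-R² across variable blocks. The file and column names are hypothetical, and the survey's actual variable definitions and weighting are not reproduced.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per child, binary outcome 'wears_helmet',
# plus hypothetical demographic, SES, and attitudinal columns.
df = pd.read_csv("helmets.csv")  # hypothetical file

base = smf.logit("wears_helmet ~ age + sex + region", data=df).fit(disp=0)
ses = smf.logit("wears_helmet ~ age + sex + region + social_class + income",
                data=df).fit(disp=0)
att = smf.logit("wears_helmet ~ age + sex + region + feels_safer + cost_concern",
                data=df).fit(disp=0)

# McFadden's pseudo-R^2 as a rough guide to explained variation per block.
for name, m in [("demographic", base), ("+ SES", ses), ("+ attitudes", att)]:
    print(name, round(m.prsquared, 3))
```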

Results: The majority of children in this age group reported owning a bicycle (89.6% (95% confidence interval, 88.8 to 90.4)). Of those who did, 17.6% (16.0 to 19.3) reported that they always wore a helmet while riding a bike, 32.9% (31.1 to 34.7) that they sometimes wore a helmet, 41.4% (40.9 to 44.7) that they never wore a helmet, and 6.7% (6.1 to 7.4) that they never rode their bike. The strongest attitudinal association with helmet wearing was with feeling safer when wearing a helmet; this was more important than the cost of helmets, even for children from the poorest households. Both social class and household income were significantly associated with wearing a helmet when cycling, with children from lower SES households less likely to report wearing a helmet. When we compared socioeconomic with attitudinal variables we found that attitudinal variables explained more of the variation in reported helmet wearing, even when controlling for socioeconomic variables.

Conclusions: The findings suggest that both socioeconomic status and attitude influence whether children wear bicycle helmets but that attitudinal factors play a greater role. Interventions intended to promote cycle helmet wearing among children are likely to be most effective if they focus on changing children’s attitudes to bicycle safety, and in particular on increasing perceptions that wearing a cycle helmet is a safe thing to do.

Population health

24 THE LIMITING LONG TERM ILLNESS AND GENERAL HEALTH QUESTIONS AT THE 2001 CENSUS: AN ANALYSIS OF SELF REPORTED HEALTH IN RELATION TO MORTALITY IN THE FOUR YEARS FOLLOWING THE NORTHERN IRELAND CENSUS

M. Rosato, D. O’Reilly.Department of Epidemiology and Public Heath, Queens University Belfast, UK

Background and objectives: Two self reported health questions were included in the 2001 UK census: on long term illness limiting daily activity (LLTI) and on general health over the previous 12 months. This builds on the success of the LLTI question of the 1991 census and its subsequent use in the assessment of need and the derivation of resource allocation formulae. However, the inclusion of both questions raises the question of what information each contains. In this analysis we examine both measures in relation to mortality, and how these relations are modified by the inclusion of demographic factors such as age, sex, and marital status, and of selected socioeconomic indicators at both individual and area level.

Design: Longitudinal analysis: a four year follow up study of those enumerated in the 2001 census of Northern Ireland.

Subjects: 950 000 people aged between 25 and 74.

Outcome: Mortality between July 2001 and June 2005

Results: In models adjusted for demographic and socioeconomic circumstances, women were more likely than men to report having either fair or poor health (odds ratio (OR) = 1.19 (95% confidence interval, 1.17 to 1.20)) and less likely than men to report LLTI (OR = 0.97 (0.96 to 0.99)); both indicators confirm the health inequalities associated with socioeconomic circumstance. There were 17 000 deaths during the follow up period. After adjustment, those reporting LLTI at the census showed a near threefold mortality risk (HR = 2.85 (2.81 to 2.89)) compared with those reporting no LLTI. Those reporting fair or poor general health had higher all-cause mortality risks (HR = 1.73 (1.56 to 1.90) and HR = 3.24 (2.94 to 3.58), respectively) compared with those reporting good general health. An analysis including both measures showed that each made an independent contribution to mortality risk, though there was some evidence of information redundancy. Cause-specific mortality in relation to these indicators will also be discussed.
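
A minimal sketch of this kind of survival analysis using the lifelines package; the file and column names are hypothetical (and numerically coded), and the census analysis involved a much richer covariate set than is shown here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per person: follow up time (years), death indicator, and
# self reported health at the census (all column names hypothetical).
df = pd.read_csv("census_followup.csv")

cph = CoxPHFitter()
cph.fit(df[["time", "died", "llti", "fair_health", "poor_health",
            "age", "sex"]],
        duration_col="time", event_col="died")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```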

Conclusions: The LLTI and General Health indicators reflect different dimensions of health, which probably explains the variations in response by sex, and each contributes independently to mortality risk.

25 DETERMINANTS OF SELF RATED HEALTH AND MORTALITY IN A RUSSIAN POPULATION SAMPLE

F. Perlman.ECOHOST, London School of Hygiene and Tropical Medicine, Keppel Street, London, UK

Background: Life expectancy in Russia declined dramatically after 1991. Self rated health is often studied as a substitute for mortality in Russia, as data are scarce. Socioeconomic variations in both outcomes have been shown.

Objectives: To measure the association between self rated health and mortality in a Russian population sample, and to compare the associations between socioeconomic and other variables and self rated health and mortality.

Design: The Russia Longitudinal Monitoring Survey is a prospective panel study of households and individuals. Analyses were based on data from seven rounds (1994 to 2001).

Setting: 38 population centres across the Russian Federation sampled after socioeconomic stratification.

Participants: 11 482 adults aged 18 and over, from households of two or more; 782 respondents died, and the mean time in the study was 4.2 years.

Main outcome measures: Self rated health at entry (five point scale, dichotomised into poor/very poor versus average/good/very good); death from any cause, reported by a household member.

Results: Over half the respondents reported “average” health. Poor or very poor self rated health significantly predicted mortality (hazard ratio (HR) = 1.69 (1.36 to 2.10) for men; 1.74 (1.38 to 2.20) for women). Education (three categories) predicted mortality (HR = 0.66 (0.59 to 0.74) for men; 0.57 (0.50 to 0.65) for women) more strongly than it predicted poor or very poor self rated health (odds ratio (OR) = 0.77 (0.69 to 0.87) for men; 0.88 (0.81 to 0.97) for women), using Cox proportional hazards analysis and logistic regression, respectively. Self rated health was associated with life satisfaction (five point scale) (OR = 1.40 (1.27 to 1.54) for men; 1.46 (1.35 to 1.58) for women) and subjective social status (nine point scale: tertiles) (OR = 0.67 (0.58 to 0.77) for men; 0.69 (0.62 to 0.77) for women); however, these variables did not predict mortality. Smoking was associated with mortality (HR = 1.92 (1.58 to 2.34) for men; 2.77 (1.68 to 4.56) for women), but not with self rated health. Increasingly frequent alcohol intake was associated with better self rated health but higher mortality.

Conclusions: Self rated health was typically “average”, worse than in many Western countries. Important differences in the predictors of self rated health and mortality, despite the association between the two outcomes, suggest that there are influences on subjective health other than disease. Differences in the predictors mean also that caution is required when using self rated health as a substitute for mortality. Associations between self rated health, education, smoking, and mortality were comparable to other studies, supporting the reliability of the mortality data.

26 PREDICTING CORONARY HEART DISEASE DEATHS IN BEIJING IN 2010: POTENTIAL EFFECTS OF RISK FACTOR TRENDS AND POPULATION AGING

Z. Zhechun1, J. Cheng1, J. A. Critchley2, D. Zhao1, J. Liu1, Y. Jia1, W. Wang1, J. Yi-Sun1, S. Capewell3.1Department of Epidemiology, Beijing Institute of Heart, Lung and Blood Vessel Diseases, Beijing, China; 2International Health Research Group, Liverpool School of Tropical Medicine, UK; 3Department of Public Health, University of Liverpool, UK

Background: Coronary heart disease (CHD) mortality rates have halved in most industrialised countries since the 1980s. However, CHD mortality rates are still rising in most developing countries. CHD is projected to be the leading global cause of death and disability by 2020. In China, recent increases in CHD mortality have been dramatic—for instance, rising by approximately 50% in Beijing between 1984 and 1999. Approximately three quarters of this increase was attributable to large increases in total cholesterol, reflecting major changes in dietary patterns. Medical and surgical treatments had only a small impact. It is now crucial for policy making to predict future CHD mortality trends and assess the potential impact of future changes in risk factors and demography.

Objective: To assess the potential impact of changes in risk factors on numbers of CHD deaths in the Beijing population of 13 million between 1999 and 2010, to inform future CHD strategies.

Design: A previously validated model was used to estimate the CHD deaths expected in 2010 (a) if recent risk factor trends continue, or (b) if levels of major risk factors are decreased.

Results: In 2010 we would expect about 9500 additional deaths even if there were no increases in age and sex specific CHD mortality rates. This would simply reflect the aging of the population and the increase in population size in Beijing between 1999 and 2010. Continuation of current risk factor trends would result in a 48% increase in CHD deaths by 2010 (almost half being attributable to increases in total cholesterol levels). Even optimistically assuming a 1% annual decrease in risk factors, CHD deaths would still rise by approximately 20% because of population aging.
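
The broad logic of such a projection is that expected deaths are age (and sex) specific rates applied to projected population counts, with the rates scaled for assumed risk factor trends. The sketch below illustrates this shape with invented numbers; it is not the authors' validated model.

```python
# Illustrative projection arithmetic only; age bands, rates, and scalings
# are invented placeholders, not the Beijing model's inputs.
baseline_rates = {"55-64": 0.002, "65-74": 0.008}  # CHD deaths per person-year
pop_1999 = {"55-64": 800_000, "65-74": 500_000}
pop_2010 = {"55-64": 950_000, "65-74": 700_000}    # aging plus growth

def chd_deaths(pop, rates, scale=1.0):
    """Expected deaths = sum over age bands of population x rate x scaling."""
    return sum(pop[band] * rates[band] * scale for band in pop)

base = chd_deaths(pop_1999, baseline_rates)
demography_only = chd_deaths(pop_2010, baseline_rates)         # rates held fixed
rising_risk = chd_deaths(pop_2010, baseline_rates, 1.25)       # assumed rate rise
falling_risk = chd_deaths(pop_2010, baseline_rates, 0.99**11)  # 1%/year fall
print(base, demography_only, rising_risk, falling_risk)
```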

Conclusions: A substantial increase in CHD deaths in Beijing may be expected by 2010. This will reflect worsening risk factors compounded by demographic trends. Population aging in China will play an important role in the future, irrespective of any improvements in risk factor profiles.

27 DEATH CAP? THE MORTALITY BURDEN OF THE COMMON AGRICULTURAL POLICY

S. Capewell1, M. Mwatsama2, F. Lloyd-Williams2, C. Birt2, R. Ireland2.1Department of Public Health, University of Liverpool, Liverpool, UK; 2Heart of Mersey, Burlington House, Waterloo, Liverpool

Background: The Common Agricultural Policy (CAP) was originally designed to prevent shortages in staple foods. However, over the last two decades CAP subsidies have resulted in decreased intake of fruit and vegetables, and substantially increased consumption of cheap saturated fats from dairy and animal sources (in preference to monounsaturates and polyunsaturates from vegetable sources). Joffe and Robertson recently attributed 48 000 coronary heart disease deaths and 17 800 stroke deaths per decade to CAP associated reductions in fruit and vegetable intake. The impact from fats is less clear.

Objective: To estimate the burden of cardiovascular disease as a result of excess dietary saturated fats attributable to the Common Agricultural Policy.

Methods: A spreadsheet model was used, synthesising data on diet, cholesterol levels, population mortality, and risk factor levels. We conservatively assumed that without CAP subsidies, per capita saturated fat consumption would have been 1% lower, and monounsaturate and polyunsaturate intake each 0.5% higher. We then used published meta-analyses from Clarke (2003) and Law (1994) to estimate the resulting cardiovascular deaths.

Results: In 2000, the 15 European Union member states reported 588 490 coronary heart disease deaths and 391 020 stroke deaths per annum. The stated dietary intake assumptions (1% less saturated fat, 0.5% more monounsaturated and polyunsaturated fat) would have resulted in blood cholesterol levels being approximately 0.06 mmol/l lower. This in turn would have resulted in some 21 420 fewer coronary heart disease deaths (minimum estimate 18 600, maximum estimate 23 675) and 2460 fewer stroke deaths (minimum 1230, maximum 3695) each year. These findings remained stable in a rigorous sensitivity analysis.

Conclusions: Under these conservative assumptions, every decade the CAP has been responsible for approximately 214 000 additional coronary heart disease deaths and 25 000 additional stroke deaths within the European Union. The true figures may be much greater.

28 CHANGES IN THE DISTRIBUTION OF OBESITY IN THE UK ADULT POPULATION 1980 TO 2003

D. Boniface.HBU, Department of Epidemiology and Public Health, University College London, London, UK

Objective: To make use of national health surveys to explore changes in the distribution of body mass index (BMI), waist circumference, and obesity status in the UK adult population.

Method: Secondary analyses of the National Survey of Heights and Weights (1980), the Health and Lifestyle Survey (1984–85), the Diet and Nutritional Survey of British Adults (1987), and the Health Survey for England (1991 to 2003) were used to identify changes in the shapes of the distributions of BMI, waist circumference, and obesity status. The contributions of age, sex, and socioeconomic status were taken into account. Changes in percentile points are illustrated on mean difference plots, and bootstrap confidence intervals indicate statistical significance.
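
As an illustration of the bootstrap step, the sketch below estimates a confidence interval for the shift in the 95th centile of BMI between two hypothetical survey years; the distributions, sample sizes, and number of resamples are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical BMI samples from two survey years (invented distributions).
bmi_1987 = rng.normal(24.5, 3.5, 5000)
bmi_2003 = rng.normal(26.0, 4.5, 5000)

def pctl_diff(a, b, q):
    return np.percentile(b, q) - np.percentile(a, q)

# Bootstrap the change in the 95th percentile between surveys.
boot = [pctl_diff(rng.choice(bmi_1987, bmi_1987.size),
                  rng.choice(bmi_2003, bmi_2003.size), 95)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95th percentile shift: {pctl_diff(bmi_1987, bmi_2003, 95):.2f} "
      f"kg/m2 (95% CI {lo:.2f} to {hi:.2f})")
```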

Results: All distributions have shown increased proportions in the upper tails, with the main changes taking place since 1987. Waist circumference has increased proportionately more than BMI, indicating a change in body shape. Adults aged 18 to 44 have increased more than those aged 45 to 64. Socioeconomic status and sex have shown smaller relations with the changes. Adiposity increased with age in all surveys.

Conclusions: The obesity epidemic in the UK began around 15 years ago. Changes in the work and leisure environments over that period are plausible causes. There are implications for public health policy.

Parallel session B

Nutrition and health

29 ASSOCIATIONS BETWEEN BREAKFAST CONSUMPTION, FIBRE, AND FRUIT AND RISK OF OESOPHAGEAL ADENOCARCINOMAS, BARRETT’S OESOPHAGUS, AND REFLUX OESOPHAGITIS: RESULTS FROM THE FINBAR CASE–CONTROL STUDY

L. Sharp1, L. Anderson2, A. E. Carsin1, S. J. Murphy2, H. Ferguson3, B. T. Johnston4, R. G. P. Watson4, H. Comber1, J. McGuigan4, J. V. Reynolds5, L. J. Murray2,3.1National Cancer Registry Ireland, Cork; 2Northern Ireland Cancer Registry, Belfast, UK; 3Department of Epidemiology, Queen’s University, Belfast, UK; 4Royal Group of Hospitals, Belfast, UK; 5St James’ Hospital, Dublin, Republic of Ireland

Objectives: The most common subtype of oesophageal cancer in Western populations is adenocarcinoma (OAC). The pathway is thought to be: inflammation (reflux oesophagitis, RO) → metaplasia (Barrett’s oesophagus, BO) → dysplasia → OAC. OAC aetiology is poorly understood. Eating breakfast has been associated with a reduced risk of the other main oesophageal cancer subtype, squamous cell carcinoma. Whether this association reflects particular breakfast components (for example, fruit, fibre) or other factors is not clear, nor is it clear whether the same holds for OAC and its precursor lesions. We investigated breakfast consumption, fibre and fruit intake, and risks of OAC, BO, and RO.

Design: Case–control study.

Setting: Northern Ireland and Republic of Ireland.

Subjects: 226 OAC cases, 224 BO cases, 230 RO cases, and 260 controls. OAC and BO cases had histologically confirmed disease diagnosed between May 2002 and October 2004. RO cases had endoscopically visible inflammation. Controls were recruited from the Northern Ireland population-wide general practitioner register and, in the Republic, through individual general practitioners. Subjects completed food frequency questionnaires. Logistic regression methods were used to calculate adjusted odds ratios (OR) separately for OAC, BO, and RO.

Main outcome measures: Eating breakfast: none, cooked, light (“continental style”, for example fruit, cereals, yoghurt); daily fibre (energy adjusted) and fruit intake.

Results: Eating a light breakfast, v no breakfast, was associated with significantly reduced risks of OAC (odds ratio (OR) = 0.38 (95% confidence interval, 0.19 to 0.75)) and RO (OR = 0.40 (0.21 to 0.76)), and with a more modest risk reduction for BO (OR = 0.64 (0.34 to 1.23)). For OAC and RO, but not for BO, eating a cooked breakfast was associated with a slightly reduced risk compared with no breakfast (OAC: OR = 0.79 (0.47 to 1.32); RO: OR = 0.63 (0.38 to 1.06)). Both fibre and total fruit intake were significantly inversely related to risks of OAC, BO, and RO. Among controls, fibre and fruit intake were related to breakfast consumption. However, the associations between breakfast consumption and oesophageal lesions persisted after adjusting the models for fibre and total fruit; the ORs for light and cooked breakfast were virtually unchanged.
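
A minimal sketch of how such adjusted odds ratios might be produced with statsmodels, assuming a case-control data frame with hypothetical column names; the study's actual adjustment set is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per subject, case=1 for OAC (with matching
# frames for BO and RO), and FFQ-derived exposures. All names hypothetical.
df = pd.read_csv("finbar_oac.csv")

m = smf.logit("case ~ C(breakfast, Treatment('none')) + fibre + fruit"
              " + age + sex + smoking", data=df).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals.
ors = pd.concat([np.exp(m.params), np.exp(m.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors)
```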

Conclusions: Eating a continental-style breakfast was associated with reduced risks of OAC and precursor oesophageal lesions. These relations did not appear to be explained by fibre or fruit intake. Moreover, a cooked breakfast was associated with slightly reduced OAC and RO risks. If confirmed, these results have implications for the prevention of oesophageal lesions. In the meantime, further investigation of the mechanisms by which this association operates is warranted.

30 HEALTHIER SNACKS? IT’S AMAZING WHAT RAISINS CAN DO

M. Mwatsama1, F. Lloyd-Williams1, R. Ireland1, S. Capewell2.1Heart of Mersey, Burlington House, Waterloo, Liverpool; 2Department of Public Health, University of Liverpool, Liverpool, UK

Background: Chocolate, crisps, and other “unhealthy” snacks have a high saturated fat, salt, and sugar content. Saturated fat and salt are major risk factors for coronary heart disease (CHD), stroke, obesity, and diabetes. Sugar also increases obesity. In 2005, there were approximately 25 billion snacking occasions in the UK. This represents over 410 snacks per person per year. “Unhealthy” snacks accounted for nearly 40% of sales in the total snacking/impulse market: approximately 6.2 billion snacking occasions each year feature a chocolate bar, sugar confectionery product, bag of crisps, or another type of salted snack. We therefore examined the potential mortality impact of replacing these “unhealthy” snacks with “healthy” snacks (defined here as dried fruit, unsalted nuts, and seeds).

Objective: To examine the public health impact on coronary heart disease and stroke mortality if one “unhealthy” snack was replaced by one “healthy” snack per person, per day, across the UK population.

Methods: Nutritional information (saturated fat, salt, sugar, and energy) was obtained for different snack products. The average nutritional values per portion of “healthy snacks” (dried fruit snacks, nuts, and seeds) and “unhealthy snacks” (crisps, sweets, and chocolate) were then calculated. We used the Clarke equation to estimate the mean change in total blood cholesterol that would result from the reduction in saturated fat intake achieved by replacing one “unhealthy” snack with one “healthy” one. We then calculated the effects of the changes in cholesterol and salt intake on CHD and stroke deaths using equations from published meta-analyses.
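
The shape of this chain of arithmetic can be sketched as below. Every coefficient here is an invented placeholder rather than the published Clarke or meta-analysis value; the sketch only shows how the steps compose.

```python
# Illustrative back-of-envelope version of the substitution model.
# All coefficients and totals are placeholders, NOT the published values.
SAT_FAT_REDUCTION_G = 4.0      # g/day, from comparing average snack portions
SALT_REDUCTION_G = 0.3         # g/day

CHOL_PER_G_SAT_FAT = 0.005     # assumed mmol/l fall per g/day less saturated fat
CHD_FALL_PER_MMOL = 0.5        # assumed proportional CHD mortality fall per mmol/l
STROKE_FALL_PER_G_SALT = 0.02  # assumed proportional stroke fall per g/day salt

chd_deaths, stroke_deaths = 110_000, 50_000  # assumed annual UK totals

d_chol = SAT_FAT_REDUCTION_G * CHOL_PER_G_SAT_FAT
fewer_chd = chd_deaths * d_chol * CHD_FALL_PER_MMOL
fewer_stroke = stroke_deaths * SALT_REDUCTION_G * STROKE_FALL_PER_G_SALT
print(round(fewer_chd), round(fewer_stroke))
```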

Results: The replacement of one “unhealthy” snack a day with a “healthy” snack would result in an average reduction of approximately 4 g saturated fat and 0.3 g salt intake per person per day. This would lead to (a) lower blood cholesterol levels, resulting in approximately 3800 fewer deaths from CHD a year; and (b) a reduced salt intake, resulting in approximately 3000 fewer deaths from strokes.

Conclusions: Unhealthy snacks apparently kill approximately 7000 people every year from CHD and stroke in the UK. However, these calculations are conservative and the overall impact could be even greater. Public health messages should therefore emphasise that even small changes to diet can lead to potentially large benefits.

31 DOES THE SCHOOL FRUIT AND VEGETABLE SCHEME IMPROVE CHILDREN’S DIET? A NON-RANDOMISED CONTROLLED TRIAL

J. K. Ransley1, J. E. Cade1, S. Blenkinsop2, I. Schagen2, D. Teeman2, E. Scott2, D. C. Greenwood1, G. White2, S. Schagen2.1Nutritional Epidemiology Group, Centre for Epidemiology and Biostatistics, University of Leeds; 2National Foundation for Educational Research, Slough, Berks, UK

Objectives: To evaluate the impact of the School Fruit and Vegetable Scheme (SFVS) on children’s consumption of fruit and vegetables and intake of a selection of nutrients.

Design: Non-randomised controlled trial

Setting: Infant and primary schools in the North of England

Participants: 3703 children aged four to six years (reception, year 1, and year 2) at recruitment in February 2004

Intervention: One portion of fruit or vegetable was provided for each child, on each school day between February and December 2004

Main outcome measures: Portions of fruit and vegetables consumed and intake of nutrients

Results: The SFVS was associated with an increase in fruit intake across reception and year 1 of 0.5 (95% confidence interval, 0.2 to 0.5) and 0.6 (0.4 to 0.9) portions at three months, which fell to 0.2 (0.1 to 0.4) and 0.3 (0.1 to 0.6) portions at seven months, respectively. In year 2 it was associated with an increase of 0.5 (0.2 to 0.7) portions of fruit at three months, but intake fell to baseline values at seven months, when these children were no longer eligible for the scheme. Overall, at seven months there were no changes in vegetable consumption, no associations between the SFVS and energy, fat, or salt intake, and small changes in carotene and vitamin C intake.

Conclusions: The SFVS promotes a small increase in fruit intake after three months. At seven months the effect remained significant but was reduced, and returned to baseline in year 2 pupils who were no longer part of the scheme. There was a small impact on the intake of some nutrients across the children surveyed.

32 MEAT INTAKE AND CANCER RISK IN THE UK WOMEN’S COHORT

E. F. Taylor, J. E. Cade, V. J. Burley, D. C. Greenwood.Nutritional Epidemiology Group, Centre for Epidemiology and Biostatistics, University of Leeds, Leeds, UK

Background: The UK Women’s Cohort Study was started in 1993 to investigate diet and cancer relations in a group of women in the UK. While our current knowledge on diet and cancer is sufficient to make some broad recommendations, many important questions remain unanswered. In particular, previous research into meat consumption and cancer has shown inconsistent results.

Objectives: To assess the effect of meat consumption on the risk of cancer incidence, looking in particular at differences in risk related to different types of meat consumed, consumption of fresh versus processed meat, and dietary patterns, in order to group women in terms of whole diets and lifestyles rather than individual nutrient intakes.

Methods: The cohort consists of 35 372 women, aged 35–69 years at recruitment, all having been registered with the NHS Central Register for notifications of new cancers or deaths. The cohort was selected to ensure a wide range of dietary intakes and was structured to include approximately one third meat eaters, one third fish (not meat) eaters, and one third vegetarians. This cohort is therefore ideal for exploration of meat consumption. Diet was assessed at baseline between 1995 and 1998, by a 217-item food frequency questionnaire (FFQ). Survival analysis of incident malignant cancers was carried out using Cox proportional hazards regression adjusting for known confounders. Seven dietary patterns were identified using cluster analysis.

Results: There was evidence of a linear trend between total meat consumption and cancer, although this was of borderline significance: high consumption of total meat had a hazard ratio of 1.20 (95% confidence interval, 1.02 to 1.40) (p = 0.06 for trend) compared with the reference category of non-consumers. When investigated further, the relation was significant for both fresh and processed meat, with evidence of linear trends. Associations between processed meat and cancer were slightly stronger than those for fresh meat. In terms of meat type, red meat showed a significant linear trend; poultry and offal were both non-significant and gave no evidence of linear trends with cancer. There were no differences in cancer risk by dietary pattern.

Conclusions: Meat intake may be associated with an increased risk of malignant cancers; alternatively, despite adjustment for known confounders, residual confounding or other bias may persist.

Inequalities

33 INCREASING INEQUALITIES IN MORTALITY IN SCOTLAND BY AREA DEPRIVATION, 1981–2001

A. H. Leyland, R. Dundas.MRC Social and Public Health Sciences Unit, Glasgow, UK

Background: All-cause mortality in Scotland has decreased over the past 20 years. The Scottish Executive remains committed to reducing health inequalities (see, for example, Delivering for Health).

Objectives: To assess whether the same changes in all-cause and cause-specific mortality have been experienced by populations living in the most and least deprived areas.

Design: Mortality data for 1980–1982, 1991–1992, and 2000–2002 for the whole of Scotland were analysed in relation to the populations enumerated at each census, separately for men and women. Deprivation level was assessed at the level of postcode sector (mean population ∼5000) using deprivation categories (depcats) based on the Carstairs score relating to each census.

Main outcome measures: Standardised mortality rates were used to assess changes in mortality over time and the relative importance of causes. The extent of the inequality was illustrated through comparison of the most and least deprived areas (depcats 7 and 1: each containing 6–7% of the population).
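
The direct standardisation arithmetic underlying such comparisons can be sketched as follows; all counts and weights below are invented placeholders, not the study's data.

```python
import numpy as np

# Invented example: deaths and populations by age band for the most (depcat 7)
# and least (depcat 1) deprived areas, plus a standard population.
std_pop = np.array([0.25, 0.45, 0.30])  # standard weights for 3 age bands

deaths_7 = np.array([40, 300, 900])
pop_7 = np.array([60_000, 140_000, 80_000])
deaths_1 = np.array([20, 120, 350])
pop_1 = np.array([55_000, 150_000, 85_000])

def std_rate(deaths, pop):
    # Directly age standardised rate per 100 000.
    return float(np.sum(deaths / pop * std_pop) * 100_000)

r7, r1 = std_rate(deaths_7, pop_7), std_rate(deaths_1, pop_1)
print(f"excess in depcat 7: {100 * (r7 / r1 - 1):.0f}%")
```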

Results: Male mortality under 65 fell from 505 per 100 000 in 1981 to 384 in 1991 and 340 in 2001. The corresponding rates for women were 293, 229, and 193. Steep deprivation gradients, coupled with slower declines in the more deprived areas, led to a widening of inequalities. In 1981 male mortality in depcat 7 was 128% higher than in depcat 1; by 2001 this excess had increased to 338%. The excess for women increased more modestly, from 112% to 176%. Despite an overall decline of 63% in this age group, ischaemic heart disease (IHD) remained the largest single cause of mortality, accounting for 19% and 10% of male and female mortality, respectively. Excess mortality from IHD increased from 76% to 392% among men and from 156% to 486% among women. Inequalities increased for most causes considered, with the exception of suicide (men), breast cancer (women), and accidents (both). All causes except female breast cancer showed inequalities, including those in which inequalities did not increase. For certain causes the inequalities are of extreme concern: a tripling of mortality from chronic liver disease among men in the most deprived areas has led to an excess mortality of 1649%.

Conclusions: Despite mortality rates falling nationally, the magnitude of these falls has been smaller in more deprived areas. This has resulted in striking inequalities opening up between more and less deprived areas. Such inequalities are best tackled by targeting the causes underlying them.

34 DOES POVERTY ALWAYS KILL? POOR AREAS IN BRITAIN WITH RELATIVELY LOW MORTALITY RATES

R. Mitchell1, J. Gibbs1, H. Tunstall2, S. Platt1, D. Dorling3.1Research Unit In Health, Behaviour and Change, University of Edinburgh Medical School, Edinburgh, UK; 2MRC Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK; 3Department of Geography, University of Sheffield, Sheffield, UK

Objective: To identify areas of Britain with relatively low mortality rates despite experiencing long term economic adversity, and to identify factors that may have contributed to this apparent resilience.

Design: Longitudinal ecological quantitative study to identify areas with lower than expected mortality rates, and qualitative in-depth case studies to identify possible explanatory factors.

Setting: Britain

Participants: All parliamentary constituencies in Britain 1971–2001, and residents therein.

Main outcome measures: Age group specific mortality rates.

Methods and Results: 54 constituencies were identified as having been in persistent adverse economic circumstances in 1971–1991. Controlling for levels of economic adversity, each constituency was assessed to measure the degree and consistency, across ages and over time, with which it showed low age group specific mortality rates relative to the group of 54. A group of 18 constituencies offered strong evidence of relatively low mortality rates, at a wide variety of ages and over time; these were labelled “resilient”. For those aged 25 and over, average mortality rates in the resilient group were significantly lower (for example, among those aged 45–59, the mortality rate (1996–2001) was 607 per 100 000 in the resilient group and 728 among the other 36 constituencies (p = 0.013)). In-depth interviews with key local informants, and secondary data sources, were used to profile the areas and seek mechanisms for their apparent resilience to the expected health impacts of economic adversity. We found no single “x factor” to explain the low mortality rates. However, common factors such as the absence of large scale out migration following industrial closure, a unifying community culture, and local policies to prevent neighbourhood decline were identified as making a contribution.

Conclusions: There is evidence that some areas are more resilient to the adverse health impacts of persistent economic adversity than others. While there does not seem to be a single “x factor” policy or area characteristic which explains the resilience, the research has identified common ingredients, combinations of which appear to offer some protection to community health under adverse economic circumstances.

35 INEQUALITIES IN DECAYED, MISSING AND FILLED TEETH IN 5-YEAR OLD CHILDREN IN SCOTLAND, 1993–2003

K. A. Levin, G. Topping, N. B. Pitts.Dental Health Services Research Unit, University of Dundee, Dundee, UK

Background: Socioeconomic inequalities in health outcomes such as mortality have widened in Scotland in the last two decades. Previous research suggests there are significant differences between deprivation categories in the prevalence and amount of decayed, missing, and filled teeth (d3mft).

Objective: To describe the variation in obvious decay experience among 5-year-olds in Scotland and to look at the association between d3mft and deprivation in Scotland.

Methods: Data derived from the 1993–2003 National Dental Inspection Programme were modelled using multilevel binomial and Poisson modelling, adjusting for age, sex, and deprivation using the Carstairs indicator.

Results: Adjusting for age and sex, the odds of having d3mft>0 in any year were 0.96 (95% confidence interval, 0.96 to 0.97) times those of the previous year. When the amount of d3mft for those with d3mft>0 was modelled, there was also a significant reduction over time. The prevalence of children with d3mft is therefore falling and, for those that have it, the amount is also falling. Deprivation was positively and significantly associated with having d3mft: the odds of a child in DepCat 7 (most deprived) having d3mft in 1993 were 7.37 (5.01 to 10.85) times those of a child in DepCat 1 (most affluent). However, inequalities in the prevalence of d3mft have reduced, and in 2003 the odds of a child in DepCat 7 having d3mft were 4.69 (3.56 to 6.17) times those of a child in DepCat 1. Alongside this, socioeconomic inequalities in the amount of d3mft for those with d3mft have increased over time. In 1993, for those in DepCat 7 with d3mft, the relative risk of having an additional d3mft was 1.69 (1.47 to 1.94) times that of a child in DepCat 1; in 2003 it was 1.91 (1.70 to 2.16) times that of a child in DepCat 1. Similarly, the relative risks of a proportional increase in d3mft for those in DepCat 5 and DepCat 6 relative to DepCat 1 increased between 1993 and 2003.

Conclusions: Deprivation accounts for much of the variation in the prevalence of d3mft in Scotland. Socioeconomic inequalities in the prevalence of d3mft have decreased in recent years, while socioeconomic inequalities in the amount of d3mft have increased. This suggests that improvements are only seen in those children with low d3mft. Children in more deprived areas with high d3mft continue to have high d3mft.

36 USING GEODEMOGRAPHICS TO ILLUSTRATE HEALTH INEQUALITIES

D. Dedman, T. Hennell, J. Hooper, K. Tocque, M. Bellis.North West Public Health Observatory (NWPHO), Liverpool John Moores University Centre for Public Health, Castel House, North Street, Liverpool, UK

Objective: To examine patterns of health and illustrate inequalities in the North West region of England

Design: Ecological analysis of admission rates in the Office for National Statistics (ONS) lower layer super output areas (LSOAs), using hospital episode statistics (HES).

Participants: Residents of North West England, 1998–2003.

Methods: We used LSOAs as the geographical unit. Numbers of persons admitted to hospital for a range of conditions were obtained for each LSOA, based on the postcode of residence of the patient. Mid-year population estimates (ONS) and deprivation scores (IMD2004) were obtained for each LSOA. LSOAs were also classified according to a geodemographic type using a commercially available segmentation system (P2 People and Places). Data from LSOAs were pooled and age standardised rates were calculated for each geodemographic type. Rates were displayed graphically using income deprivation (from IMD2004) as a ranking variable. The results were compared with those from similar analyses which used deprivation quintile (IMD2004), instead of the geodemographic typology, to classify LSOAs.

Results: As expected, many conditions were strongly related to deprivation. However, for some conditions and in some geodemographic categories admission rates were higher or lower than would be expected from income deprivation alone. For example, admissions for violence were lower than expected in “Country Orchards” and “Multicultural Centres”; admissions for alcohol related conditions were higher in “New Starter” areas; and elective admissions for hip replacement surgery were unexpectedly high in “Country Orchard” areas.

Conclusions: Geodemographic segmentation systems are a potentially useful and relatively unexplored tool for examining patterns of health. Geodemographic typologies also have potential advantages for targeting interventions to specific groups. We present further background on geodemographic systems, and show how this kind of analysis can be used to identify small geographic areas within a given local authority district which are likely to experience particular health problems, and may therefore be targeted by intervention programmes.

Respiratory health

37 SOCIOECONOMIC STATUS, PARTICULATE AIR POLLUTION, AND MORTALITY

M. Carder1, R. Agius1, R. McNamee1, I. Beverland2, R. Elton3, M. Van-Tongeren1, J. Boyd4.1University of Manchester; 2University of Strathclyde; 3University of Edinburgh; 4Information and Statistics Division of the NHS in Scotland

Background: The link between social deprivation and ill health has been clearly shown in various studies, with populations living in deprived areas showing levels of mortality substantially in excess of those in affluent areas. In the first decade of our study period, these differentials increased in Scotland as inequalities in health widened. However, the relative contribution of environmental and other interacting factors to these differentials in health and mortality has yet to be unravelled. The aim of this study was to determine whether socioeconomic status modified the effect of particulate air pollution on cardiorespiratory mortality.

Methods: Generalised linear Poisson regression models were used to investigate whether socioeconomic status, as measured by the Carstairs category, modified the effect of black smoke (BS) on mortality in the two largest Scottish cities, Edinburgh and Glasgow, between January 1981 and December 2001. Lag periods of up to one month were considered for the temperature and BS variables.
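
A minimal sketch of this kind of model, assuming a data frame with one row per day and deprivation stratum and hypothetical column names (deaths, bs, temp, depcat); the study's actual confounder control and full lag structure are not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed daily series: death counts, black smoke (bs), temperature (temp),
# and area deprivation category (depcat). All column names hypothetical.
df = pd.read_csv("daily_mortality.csv")

# Simple lagged exposure terms (the study considered lags up to one month).
for k in (7, 28):
    df[f"bs_lag{k}"] = df.groupby("depcat")["bs"].shift(k)
df = df.dropna()

# Poisson regression with a BS x deprivation interaction to test
# effect modification by socioeconomic status.
m = smf.glm("deaths ~ bs + bs_lag7 + bs_lag28 + temp + C(depcat) + bs:C(depcat)",
            data=df, family=sm.families.Poisson()).fit()
print(m.summary())
```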

Results: Socioeconomic status significantly modified the effect of black smoke on mortality, with black smoke effects generally increasing as the level of deprivation increased. The estimated increase in respiratory mortality over the ensuing one month period associated with a 10 µg/m3 increase in the mean black smoke concentration was 8.0% (95% confidence interval, 5.1 to 10.9) for subjects residing in the “most” deprived category, compared with 3.7% (−0.7 to 8.4) for subjects residing in the “least” deprived category.

Conclusions: The results suggest a greater black smoke effect in more deprived populations. If corroborated, these findings have important health implications, as they provide evidence that people who are socially deprived are at increased risk of particulate pollutant related mortality. Moreover, pollution sources tend to be concentrated in socioeconomically disadvantaged areas, so this subset of the population, who may be at increased risk of pollutant related mortality, may also experience higher pollutant exposure than the general population.

38 RAPID EARLY DISCHARGE IN COPD: A PROSPECTIVE LOCAL EVALUATION WITH CONCURRENT CONTROLS

Y. M. Chang1, A. Clarke2, S. Taylor1, G. Wilson1, R. Sohanpal1.1Public Health and Policy Research Unit, Barts and the London School of Medicine and Dentistry, London, UK; 2Public Health Resource Unit, Oxford, UK

Background: COPD (chronic obstructive pulmonary disease) is a long term condition most commonly caused by smoking. Sufferers experience exacerbations which often result in hospital admission. However, patients are thought to prefer care at home, and this is encouraged by health systems keen to reduce costs. We report here the experience of a district general hospital trust in London which aimed to introduce a rapid early discharge scheme (REDS) in one borough with a population of 300 000 people. The scheme involved discharge at three days or less, with visits at home for three days thereafter and close supervision for up to two weeks. Patients who were concerned could contact REDS staff by telephone during or after the two week period. Systematic reviews suggest that early discharge is safe, but evidence on whether such interventions change the likelihood of readmission to hospital is equivocal.

Methods: An anonymised routine data set (ARDS) was used to identify all patients who had experienced a hospital admission for COPD in the year April 2003 to March 2004. The REDS database was used to identify REDS patients from the same time period. The subset of REDS patients was identified within the ARDS, using NHS number or other matching identifiers. Data were cleaned.

Main outcome measures: Length of stay (LOS) and time to next emergency admission.

Results: There were 55 REDS patients, of whom 32 had a subsequent admission, and 173 ARDS patients, of whom 26 had a subsequent admission within the study time period. Median (mean) LOS was 2 (3.6) days for REDS and 5 (8.53) days for ARDS patients, respectively (Kruskal-Wallis: χ2 = 16.7337, df 1, p<0.0001). Median (mean) times to next emergency admission were 38.5 (57.59) days for REDS and 39 (52.8) days for ARDS, respectively (Kruskal-Wallis: χ2 = 0.2998, df 1, p = 0.584). All distributions were strongly right skewed.
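
For reference, a Kruskal-Wallis comparison of this kind can be reproduced in outline with scipy; the length-of-stay values below are invented, and with only two groups the test is equivalent to a Wilcoxon-Mann-Whitney comparison, which is why df = 1 here.

```python
from scipy.stats import kruskal

# Hypothetical right-skewed length-of-stay samples (days) for the two groups.
reds_los = [1, 2, 2, 3, 3, 4, 5, 8, 14]
ards_los = [2, 4, 5, 5, 6, 7, 9, 15, 30, 41]

stat, p = kruskal(reds_los, ards_los)
print(f"Kruskal-Wallis chi-square = {stat:.3f}, p = {p:.4f}")
```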

Conclusions: It appears that the early discharge scheme was successful in achieving a shorter length of stay, without shortening the time to next emergency admission.

39 ETHNIC DIFFERENCES IN CHILDHOOD RESPIRATORY HEALTH: FINDINGS FROM THE MILLENNIUM COHORT STUDY

L. Panico, Y. Kelly, M. Bartley, M. Marmot, J. Nazroo, A. Sacker.Department of Epidemiology and Public Health, University College London, London, UK

Background: It is not clear how respiratory morbidity during infancy and early childhood varies across ethnic groups in the UK. This study sought to determine whether reported contact with health services due to respiratory problems during infancy differed across ethnic groups and whether patterns persisted or changed during early childhood.

Methods: Data from the UK Millennium Cohort Study on 18 496 infants were analysed. Parental interviews were conducted when the cohort member was aged approximately 9 months (sweep 1) and 3 years (sweep 2). Questions included contact with primary care services or hospital admissions for respiratory problems (sweep 1), and the occurrence of respiratory symptoms (sweep 2).

Results: 6.4% of all infants (n = 1178) had contact with primary care services because of asthma or wheezing. Black Caribbean infants were more likely (11.7%) and Indian infants less likely (3%) to have contact with primary care services compared with whites (6.9%). Pakistani infants were more likely (2.1%) to be admitted to hospital than whites (0.8%). These differences remained after adjustment for biopsychosocial factors for black Caribbean (odds ratio (OR) = 1.96 (95% confidence interval, 1.21 to 3.18)) but lost statistical significance for Indian infants (OR = 0.48 (0.18 to 1.23)). The proportion of hospital admissions was low (0.9% of sample, n = 164). At age 3, 12.3% (n = 1902) of children had ever had asthma and 20.0% (n = 3030) had wheezed in the past 12 months. Black Caribbean children (18.2%) were more likely and Bangladeshi children (5.0%) less likely than white children (11.6%) to have ever had asthma. Black Caribbean children were more likely (25.5%) and Indian (16.9%) and Bangladeshi children (8.7%) less likely to have had recent wheeze than white children (19.4%). After adjustments, the disadvantage in asthma and recent wheeze for black Caribbeans was mostly explained by socioeconomic factors (asthma, OR = 1.42 (0.96 to 2.09); recent wheeze, OR = 1.18 (0.85 to 1.64)). The Bangladeshi advantage was largely unexplained by our models, although statistical significance was lost (asthma, OR = 0.40 (0.15 to 1.09); recent wheeze, OR = 0.44 (0.18 to 1.09)). The Indian advantage was mainly explained by cultural factors (recent wheeze, OR = 1.04 (0.66 to 1.63)).

Conclusions: Differences exist in health care contact for respiratory problems across ethnic groups during infancy, and these are not explained by a range of known risk factors. The early childhood disadvantage in respiratory morbidity among the black Caribbean groups appears largely attributable to socioeconomic factors, whereas the apparent advantage of some South Asian groups remains mostly unexplained.

40 PREDICTORS OF ASTHMA IN CHILDREN IN IRELAND: A MULTIVARIATE ANALYSIS OF DEPRIVATION AND SOCIAL SUPPORT

N. Fitz-Simon1, U. Fallon1, D. O’Mahony1, G. Bury1, C. Kelleher1, A. Murphy for the Lifeways Steering Group2.1School of Public Health and Population Science, University College, Dublin; 2Department of General Practice, National University of Ireland, Galway, Republic of Ireland

Objective: To examine the impact of sociodemographic characteristics, including social support received by mothers, on the development of asthma in children.

Design: Multivariate logistic analysis of the Lifeways cross generational cohort study of families.

Setting: Three year follow up through general practice of children born in maternity hospitals in two Irish cities: Galway in the west and Dublin in the east of the Republic of Ireland. Babies’ and mothers’ hospital medical records were available, and mothers were also asked to complete a standardised, previously validated questionnaire at recruitment, containing health status and social and demographic information. Social support was calculated as the number of seven possible sources reported as providing a good deal of support, and was categorised as high (>2 sources) or low.

Participants: 1016 babies recruited in the antenatal period born between October 2001 and February 2003. General practice follow up clinical record data by summer 2006 were available for 691 children (68%).

Main outcome measure: Diagnosis of asthma by 3 years of age.

Results: 10.2% of children had a diagnosis of asthma. Children in the middle of the birth weight range (mean = 3502 g, SD = 581 g) were at lower risk of asthma than those of low and high birth weights. Boys were more likely to have asthma than girls (odds ratio (OR) = 2.2 (95% confidence interval, 1.3 to 3.8)). Children born in Galway were less likely to have asthma (OR = 0.49 (0.26 to 0.93)) and children eligible for means tested General Medical Services cards were more likely to have asthma (OR = 2.3 (1.2 to 4.5)). A high level of social support from several sources (partners, parents, other relations, friends, and employers) reported by single mothers was associated with a lower probability of asthma diagnosis at three years of age (p = 0.03), but this effect was not observed in other (partnered) mothers. Sociodemographic and lifestyle variables included in the model which were not significant were mother’s age, income level, medical insurance, education levels of mother, father, and mother’s parents, breast feeding, mother’s smoking status, exposure to smoke in the home, and self reported pollution in the environment.

Conclusions: A diagnosis of asthma is more likely in the children of single mothers who do not report high levels of social support. The level of social support in other mothers does not have an impact on the likelihood of asthma by age 3 years. This suggests that networks of support have an influence on children’s health in settings with higher levels of deprivation.

Methods I

41 ACHIEVING A REASONABLE RESPONSE RATE FROM GPs ON USE OF, AND NEED FOR, REFERRAL GUIDELINES

N. Le Maistre1, A. Clarke2, J. Van der Meulen1, J. Browne1.1Clinical Effectiveness Unit, The Royal College of Surgeons of England, 35–43 Lincoln’s Inn Fields, London, UK; 2Public Health Resource Unit, Oxford, UK

Background: Referral guidelines can promote efficient use of resources by helping general practitioners (GPs) carry out their role as gate keepers to secondary care services. We aimed to undertake a national 1% survey of GPs to establish their use of, and need for, referral guidelines, as part of the SDO funded REFER project to develop guidelines for elective surgical referral.

Methods: Systematic reviews were consulted to ensure that methods associated with increased response were used. The referral guidelines questionnaire was designed and piloted, based on previous research and pilot interviews. Representative primary care trusts (PCTs) were selected using the Office for National Statistics “supergroup” data. After obtaining MREC, local ethics committee, and PCT research governance approval, PCTs were asked to identify GPs working in their area. A stratified random sample of 30% of GPs was selected (n = 320). An eye catching postcard was sent to sampled GPs explaining the study and the questionnaire they were about to receive. Information was included explaining that response could be by post, fax, web based questionnaire, or telephone. The questionnaire itself was sent 10 days later, and again at two weeks to non-respondents. After two questionnaires and a further eye catching postcard, non-respondents were sent a personalised letter inviting response by each modality and indicating that a member of the study team would contact them in the next three weeks. Non-respondents were contacted by telephone in five of the 10 participating PCTs.

Results: Some PCTs held incomplete and out of date lists of their constituent GPs. The overall response rate was 40.3% (n = 129). Response rates varied between 25% and 60.9% by PCT, with urban PCTs having the lowest rates. Non-respondents were more likely to be single handed practitioners. Response to the first questionnaire was 21.9%, with a further 11.6% responding to the second questionnaire; 5% responded after receiving the personalised reminder letter. Only five GPs made use of the web based questionnaire, two completed a faxed copy, and none requested a telephone questionnaire. Seventy nine non-respondents were telephoned, resulting in a further six responses.

Conclusions: Undertaking a national GP survey in the current NHS and research governance climate proved almost impossible. Response rates are too low to draw generalisable conclusions about GPs’ use of, and need for, referral guidelines. Strategies to increase response, and multiple possible modalities of response, did not appear to improve response rates.

42 CONTROL OF ANAEMIA IN HAEMODIALYSIS PATIENTS

K. Harris1, E. J. Will2, C. Tolman2, M. S. Gilthorpe1, R. M. West1.1Biostatistics Unit, Centre for Epidemiology and Biostatistics, University of Leeds; 2Department of Renal Medicine, St James’s University Hospital, Leeds, UK

Background: Epoetin agents, such as erythropoietin β (EPO) or darbepoetin α (DA), are used for the correction of anaemia in haemodialysis patients. The dose of the drug is often managed with the assistance of a computerised decision support system (CDSS), where the aim is to keep the haemoglobin (Hb) concentration between the limits of 10.5 and 13.0 g/dl. In a recent randomised controlled trial, the two agents EPO and DA were compared. The key research questions were: (1) Can patients be managed with a regimen of a single injection weekly? (2) Do the two agents offer comparable control of Hb?

Materials and methods: In all, 13 monthly Hb measurements were available for 77 and 74 patients, respectively, in the two arms of the trial. Fitting smoothed B-splines to these repeated measures provided useful functional representations, and graphical plots informed the understanding of patient responses. Phase plots of the first v second derivatives of the Hb trajectories (that is, velocity v acceleration of change in Hb levels) identified those patients brought under control by the drug administration system and those in whom the CDSS was less successful. Functional principal components analysis identified common features of the Hb trajectories that contributed to the assessment of the management system and the comparative performance of the two epoietin agents. Functional comparison of the two treatment groups enabled assessment of the required dose conversion and provided a novel means to compare the overall effectiveness of the two agents.
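
A rough sketch of the smoothing-and-derivatives step, using a scipy smoothing spline as a simple stand-in for the B-spline fits described above; the Hb values and smoothing parameter are invented, and the actual analysis used functional data analysis tooling rather than this minimal approach.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# One patient's 13 monthly Hb values (g/dl); the numbers are invented.
months = np.arange(13)
hb = np.array([9.8, 10.4, 11.2, 11.9, 12.4, 12.6, 12.3, 12.0,
               11.8, 11.9, 12.1, 12.2, 12.1])

spline = UnivariateSpline(months, hb, k=4, s=1.0)  # smoothed representation
velocity = spline.derivative(1)      # first derivative of the Hb trajectory
acceleration = spline.derivative(2)  # second derivative

# Phase-plane coordinates (velocity v acceleration) along the trajectory.
t = np.linspace(0, 12, 200)
phase = np.column_stack([velocity(t), acceleration(t)])
print(phase[:3])
```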

Results: The mean functions of the two groups were maintained between the two control limits, showing that both agents were capable of delivering satisfactory control under this management regimen of weekly injections and CDSS. Closer inspection revealed that the mean Hb functions started together but then separated and returned together after nine months of treatment; this was related to dose conversions. The functional principal components analysis further emphasised this mode of variation and some oscillatory behaviour was also identified.

Conclusions: Functional data analysis has proven a very valuable tool in the analysis of this repeated measures dataset. In particular, phase plots yielded information relating to the control of each patient’s Hb levels. Subtle differences in the effects of the two epoietin agents under the CDSS were noted and quantified. Further work on aspects of control has been motivated by the findings from this study.

43 PERSONAL AND FAMILY NAMES: A PROPOSAL OF THEIR USE BY NHS TRUSTS FOR THE TARGETING OF RESOURCES AND PUBLIC HEALTH COMMUNICATIONS

P. Mateos, R. Webber.Centre for Advanced Spatial Analysis, Department of Geography, University College London, London, UK

Background: Although targeting sectors of the population has been recognised as a key characteristic of most effective health interventions (Department of Health, 2005), traditionally the concept of targeting has been much more readily accepted in business than in the public sector. Businesses typically define target audiences for the products or services that they sell, selecting advertising media and retail locations which are the most cost-effective in reaching those audiences, with the help of geodemographics and GIS tools. Government’s job, by contrast, is traditionally seen as providing a uniform quality of services to all citizens, although more recently the public sector has begun to appreciate the merits of being more selective in the targeting of public communications, as this is crucial in communicating with “hard to reach” population groups. Today public bodies are mandated to recognise the distinct needs of different community groups and to address the cultural, linguistic, and other barriers that result in their low uptake of different services, and sometimes higher prevalence of certain health conditions. The targeting of individuals at higher risk in health promotion campaigns should yield a much higher return on investment in communications than a “scatter gun” approach.

Objectives: In this paper we review the growing body of work analysing people’s personal and family names to identify their cultural origins, and propose name analysis as a method of directing certain communications programmes at very specific minority groups.

Methodology: A systematic review of the literature in name analysis and epidemiology was conducted, establishing patterns of similarity in methods employed, constraints and solutions to overcome them, as well as their applications and validity measurement. Three name classifications were selected—Nam Pehchan, SANGRA, and Origins—to test their validity in coding the NHS patient register in Camden PCT by cultural, ethnic, and linguistic group, for the purpose of targeting public health communications to minority population groups at fine scales in inner London. This approach was then compared with more traditional ones using census data.
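
For concreteness, the sensitivity and specificity of a name classification against a reference ethnicity coding can be computed as in the toy sketch below (the labels and data are invented; this is not one of the classifications evaluated here).

    def sensitivity_specificity(predicted, reference, group):
        pairs = list(zip(predicted, reference))
        tp = sum(p == group and r == group for p, r in pairs)   # true positives
        fn = sum(p != group and r == group for p, r in pairs)   # false negatives
        tn = sum(p != group and r != group for p, r in pairs)   # true negatives
        fp = sum(p == group and r != group for p, r in pairs)   # false positives
        return tp / (tp + fn), tn / (tn + fp)

    pred = ["SouthAsian", "Other", "SouthAsian", "Other", "Other"]
    ref = ["SouthAsian", "SouthAsian", "SouthAsian", "Other", "Other"]
    print(sensitivity_specificity(pred, ref, "SouthAsian"))     # ~(0.67, 1.0)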

Results: Accuracy of name analysis at the individual level was measured at a sensitivity of 0.67 to 0.92 and a specificity of 0.67 to 0.99. At the postcode unit level there was a correlation with census ethnicity of 0.40 to 0.83. The probability of the message reaching the intended audience using names was over five times higher than using census output area information.

Conclusions: Targeting of individual cultural groups is significantly more cost-effective using name analysis than census data.

44 THE IMPACT OF INTERNAL MIGRATION ON POPULATION HEALTH IN SCOTLAND

D. Brown, A. H. Leyland.MRC Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK

Background: Changes in population health and progress towards targeted reductions in inequalities are assessed on the basis of the comparison of area populations over time. However, population migration introduces problems into the measurement of population health. Migration patterns are not random; populations have been decreasing in deprived (and high mortality) areas and increasing in affluent (and low mortality) areas.

Objective: To examine the relation between migration patterns, deprivation, and population health in Scotland.

Design: 2001 Scottish census data were used to assess population change at the output area level in the year preceding the census. Areas were classified as having experienced a net population increase from elsewhere in the UK (5%+ increase), a net decrease (5%+ decrease), or as having remained stable (<5% total change), with high or low turnover. Net UK population change was also assessed at the 10% level. The Scottish Index of Multiple Deprivation 2004 was used to measure area level deprivation.

Main outcome measures: Age standardised all-cause mortality (2000–2002).

Results: In all, 171 319 deaths and 42 604 output areas (average population = 119) were analysed. The population in 18% of output areas increased by at least 5% while that in 19% decreased. Directly standardised death rates were calculated separately for men and women. In all four area types there was a steep mortality gradient across deprivation quintiles. For increasing populations, mortality in the most deprived quintile was 118% higher for men and 61% higher for women than in the least deprived. Areas with increasing populations had higher mortality than stable populations in all deprivation quintiles for men and women. Mortality was also higher in this group, when compared with decreasing populations, in all but the most deprived quintile, where decreasing populations had markedly higher mortality (mortality rate of 1194 per 100 000 in men and 1139 per 100 000 in women). As a result, excess mortality in the most deprived quintile was highest for decreasing populations: 180% higher for men and 137% higher for women. These differences were exaggerated when population change at the 10% level was considered.

Conclusions: Areas with net population inflow have higher mortality rates than other areas of comparable deprivation. Among the most deprived areas, those with net population outflow have the worst health overall.

Psychosocial health

45 OUTCOMES AND SERVICE USE OF VULNERABLE SERVICE LEAVERS FROM THE BRITISH ARMED FORCES

L. Van Staden, C. French, A. Iversen, S. Wessely.Kings Centre for Military Health Research, New Medical School Building, Denmark Hill, London, UK

Background and aims: Little is known about what happens to veterans when they leave the armed forces, although there has been much publicity about homelessness and more recently about mental ill health within the veteran population. This paper presents the results of a study that investigated the paths of “vulnerable” Service leavers. It was decided that early Service leavers (those who had left the Forces before the end of their Service term) represented a potentially vulnerable subset of the Army population, and this group formed the focus of the study. To this end, those leaving through the Colchester Military Correction and Training Facility (MCTC) were used as the study cohort. Participants were interviewed before departure with regard to their hopes, aspirations, and plans, and then again six months after release.

Methods: Mixed methodology was used, based on semistructured face to face interviews just before discharge, and follow up telephone interviews six months later.

Results: A response rate of 99% was achieved predischarge (n = 111), and 67% for the follow up (n = 74). Compared with the general civilian population, the study population had higher rates of mental ill health, along with high levels of debt, homelessness, temporary housing, and unemployment. Few participants made use of the wide range of services that are available in these areas.

Conclusions: Those leaving the Armed Forces through MCTC face multiple risks and challenges that are often ill addressed. This reflects a combination of lack of knowledge of civilian procedures, bad experiences with the services accessed, and/or a continuation, upon release, of the military culture and its emphasis on self reliance. Current service provision therefore appears inadequate to meet the needs of vulnerable service leavers without increased signposting or more targeted services.

46 “MASCULINITY”, “FEMININITY”, “ANDROGYNY”, AND MORTALITY IN LATE MID-LIFE

K. Hunt, H. Lewars, C. Emslie, D. Batty.MRC Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK

Background: In the 1970s, androgyny theorists speculated that “androgynous” people (that is, those who do not restrict their behaviour to conform to traditional cultural definitions of sex typed behaviour) would be in better mental health than “sex typed” people (“masculine” men and “feminine” women). Others suggested that they would be in no better health than people with high “masculine” scores, as “masculine” characteristics are highly valued in patriarchal societies. No studies to date have examined these psychological constructs in relation to mortality.

Objective: To assess the relation between masculinity and femininity scores in late mid-life and subsequent mortality.

Methods: 1042 men and women aged around 55 years completed the Bem Sex Role Inventory as part of their baseline interview for the West of Scotland Twenty-07 study in 1988. Masculinity and femininity scores were calculated. The interaction between the two scores represents androgyny. All respondents are flagged at the NHS Central Registries. Odds ratios for mortality (up to 30 June 2005, 271 deaths) were calculated from logistic regression models for all-cause mortality. All models adjusted for occupational class and smoking (current, ex-, never smoker).
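
A model of the kind described might be specified as in the sketch below (the software, file, and column names are assumptions; the abstract does not state them).

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("twenty07_baseline.csv")         # hypothetical extract
    fit = smf.logit(
        "died ~ female + masculinity + femininity"
        " + masculinity:femininity"                   # the androgyny term
        " + C(occ_class) + C(smoking)",               # confounder adjustment
        data=df,
    ).fit()
    print(fit.summary())                              # exponentiate coefficients
                                                      # to obtain odds ratios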

Results: In models including sex, masculinity, and femininity scores only, only sex was of (borderline) significance (odds ratio (OR) for women = 0.74 (95% confidence interval, 0.53 to 1.01) for all-cause mortality). The androgyny term (masculinity × femininity score) was significant when added to the model for all-cause mortality (OR = 0.77 (0.64 to 0.92)). When the models were considered separately for men and women, all-cause mortality was not related to masculinity, femininity, or androgyny in men, but androgyny was associated with mortality for women (OR = 0.63 (0.47 to 0.85)).

Conclusions: These results raise questions about the way in which sex related self image is related to health. Smoking, an adverse health behaviour with complex links to social constructions of masculinity and femininity, does not explain the association. Other potential explanations are discussed.

47 AN INVESTIGATION OF THE RELATION BETWEEN HEALTH AND PERCEPTIONS OF DIFFERENT COMPONENTS OF ENVIRONMENTAL JUSTICE

A. Ellaway1, J. Curtice2, G. Morris3, R. Robertson3, C. Robertson3, G. Allardice3.1MRC Social and Public Health Sciences Unit; 2NatCen; 3Health Protection Scotland

Background: Concern about the impact of the environment on health and wellbeing has tended to focus on the physical effects of exposure to toxic and infectious substances. The focus has been on major infrastructure (for example, power stations) rather than on more subjective perceptions of street level aspects of neighbourhoods such as litter and graffiti, which are increasingly regarded as components of environmental justice. It has been suggested that more attention should be focused on psychosocial impacts (such as feelings about the quality of the local environment) and health and wellbeing.

Objective: As little is known about the relative importance for health of perceptions of different components of environmental justice, we developed a module for inclusion in the 2004 Scottish Social Attitudes survey to investigate this.

Design and setting: A random sample of 1637 adults across a range of neighbourhoods in Scotland was interviewed. Respondents were asked to rate their local area on the amount of, and perceived problem with (whether currently or potentially present), a range of aspects of their local neighbourhood. These were subsequently grouped into three domains: (1) “street level incivilities” (for example, litter, graffiti), (2) large scale infrastructure (for example, phone masts), and (3) the absence of “environmental goods” (such as safe play areas for children). For each of the three domains of perceived local environment, we examined their prevalence and the extent to which they presented a problem according to respondents’ individual characteristics and neighbourhood deprivation; we then explored relations between reported experience of each of the three domains and self assessed health and current smoking status (after controlling for sex, age, and social class).

Results: Respondents with the highest levels of perceived “street level incivilities” were almost twice as likely to report frequent feelings of anxiety and depression, and 40% more likely to be a smoker, compared with those who perceived the lowest levels. Perceived absence of “environmental goods” was associated with increased anxiety (2.5 times more likely) and depression (90% more likely) and a 50% increased likelihood of being a smoker. Few associations with health were observed for perceptions of the larger scale infrastructural items.

Conclusions: Environmental policy needs to give more priority to reducing the incidence of “street level incivilities” and the absence of “environmental goods”, both of which appear to be more important for health than perceptions of larger scale infrastructural items.

48 THE RELATION BETWEEN ACCULTURATION AND OBESITY RISK AMONG ASIANS AND HISPANICS IN CALIFORNIA

H. Lee, R. Steinbach.Public Policy Institute of California, San Francisco, California, USA

Objective: Across various health indicators, a growing body of research in the USA has documented a pattern of deteriorating health profiles with greater acculturation among immigrants, particularly Hispanic subgroups. Using survey data from California, one of the most racially/ethnically diverse states in the country, this study examines the relation between acculturation and high body mass index (BMI) among two large racial/ethnic minority groups—Hispanics and Asians. While recent research suggests that obesity risk increases with greater length of residency in the USA among Hispanics, to our knowledge no study has looked at the relation between acculturation and obesity risk among Asians. Furthermore, this study contributes to the extant literature by exploring how neighbourhood context characteristics, in addition to individual factors, mediate BMI differences found among Asians and Hispanics.

Data sources: 2003 California Health Interview Survey

Main outcome measures: Body mass index; obesity

Results: Length of time in the USA was strongly related to higher BMI. Among Hispanics, 20% of new immigrants (<5 years in the USA) were obese, whereas almost 30% of native-born Hispanics were obese. Among Asians, ∼3% of recent immigrants were obese v 11% of their native-born counterparts. Even after adjusting for sociodemographic and neighbourhood factors, greater length of residency appeared to diminish the more advantageous BMI profiles. We also found differences in BMI patterns by country of origin, such that South Americans had lower BMIs than other Hispanics, and Filipinos had higher BMIs than other Asian subgroups. The introduction of neighbourhood characteristics—such as poverty rate, racial composition and food availability—did little to help explain these patterns.

Conclusions: Our findings regarding greater obesity risk by duration of US residency are consistent with previous research on Hispanics. Although Asians have much lower average BMI than other race/ethnic groups, the pattern of higher BMI by US residency is true for Asians as well. Perhaps this is an indication that the BMI distribution among Asians becomes more normal with greater US acculturation, as opposed to a sign of health deterioration. Yet some evidence indicates that the negative health risks associated with higher BMI occur at a lower threshold for Asians than for other groups, suggesting that there may be cause for concern. Although we cannot establish causal relations, our findings suggest that the US context is associated with higher weight status among immigrant groups, and this holds true for Hispanics and Asians alike.

Parallel session C

Diabetes and cardiovascular diseases

49 AN 11 YEAR FOLLOW UP STUDY OF MORTALITY AND CAUSE OF DEATH IN 11 125 PATIENTS DIAGNOSED WITH TYPE 2 DIABETES AND A NON-DIABETIC COMPARATOR GROUP

K. N. Barnett1, S. A. Ogston1, M. E. T. McMurdo2, J. M. M. Evans1.1Division of Community Health Sciences, University of Dundee; 2Division of Medicine and Therapeutics, University of Dundee, Dundee, UK

Objective: To compare mortality and primary cause of death of patients diagnosed with type 2 diabetes with a non-diabetic comparator group.

Design: An observational cohort study was conducted using electronically linked data obtained from the DARTS (Diabetes Audit and Research in Tayside Scotland) database. DARTS holds all clinical information on patients diagnosed with diabetes from 1993. This database is electronically linked to death certificate information using a CHI number which is issued to all patients when they register with a GP in Tayside. Patients newly diagnosed with type 2 diabetes between 1993 and 2005 were identified. Non-diabetic patients were selected from all patients registered in general practice using the CHI number and matched for age, sex, and practice. Two survival analyses were conducted using Cox regression, examining differences in mortality and primary cause of death (as given on the death certificate) between newly diagnosed diabetic patients and non-diabetic patients.
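
The survival comparison might be sketched as below, using the lifelines library as one possibility (the column names are assumptions, not the DARTS schema).

    import pandas as pd
    from lifelines import CoxPHFitter

    cohort = pd.read_csv("darts_cohort.csv")          # hypothetical linked extract
    cph = CoxPHFitter()
    cph.fit(cohort[["followup_years", "died", "diabetic", "deprivation"]],
            duration_col="followup_years",
            event_col="died")
    cph.print_summary()                               # hazard ratio for 'diabetic',
                                                      # controlling for deprivation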

Setting: Tayside region of Scotland, population 387 908 on 30 June 2004.

Main outcome measures: Hazard ratios for mortality and primary cause of death.

Results: There were 11 125 newly diagnosed diabetic patients and 22 250 non-diabetic patients in the cohort study, of whom 1177 (16%) diabetic patients and 2796 (13%) non-diabetic patients died during the 11 year follow up. The overall hazard ratio (HR) for all causes of death for patients diagnosed with type 2 diabetes compared with non-diabetic patients was 1.32 (95% confidence interval, 1.25 to 1.40), controlling for deprivation. The HR decreased as age at diagnosis increased, with patients diagnosed aged 75 or older having an HR of only 1.20 (1.11 to 1.32), compared with 5.71 (2.67 to 12.21) for patients diagnosed before the age of 45 years. Patients with type 2 diabetes had a significantly higher HR for primary cause of death from (1) neoplasms, (2) endocrine, nutritional and metabolic disorders, (3) diseases of the circulatory system, (4) diseases of the skin, and (5) external causes of mortality, and a reduced HR for mental/behavioural disorders. The HRs of mortality from neoplasms and diseases of the circulatory system also decreased with age at diagnosis.

Conclusions: Following diagnosis, patients with type 2 diabetes have an increased risk of death compared with their non-diabetic counterparts, but the hazard ratio of mortality and primary cause of death from certain conditions decreases with increasing age at diagnosis.

50 DIFFERENCES IN HYPOGLYCAEMIA RATES IN TYPE I AND TYPE II DIABETES: IMPLICATIONS FOR POLICYMAKERS

J. Freeman1, S. Heller2, P. Choudhary3, C. Emery2.1ScHARR, University of Sheffield; 2Academic Unit of Diabetes, University of Sheffield, Sheffield, UK; 3King’s College London School of Medicine, London, UK

Objective: Limits on employment and driving are applied to all individuals with insulin treated diabetes, regardless of type, owing to the potential risk of impaired performance during hypoglycaemia. Such restrictions affect the quality of life of people with diabetes and are inappropriate in those groups whose risk of hypoglycaemia is low. In view of this we assessed whether there were differences in hypoglycaemia rates between people with type I and type II diabetes, in particular those new to insulin.

Design: Prospective epidemiological study with 12 month follow up.

Setting: Subjects recruited from six diabetes centres across the UK.

Subjects: 381 people with diabetes: three type II diabetes groups (sulphonylurea (tablet) treated (n = 108); insulin treated for less than two years (n = 89); and taking insulin for over five years (n = 77)), and two groups with type I diabetes (diagnosed within five years (n = 50); on insulin for over 15 years (n = 57)).

Main outcome measures: Frequency and severity of biochemical hypoglycaemia measured by continuous glucose monitoring at baseline and after one year, and self reported hypoglycaemia rates measured over a one year follow up.

Results: Rates of hypoglycaemia measured biochemically and by self report were higher in people with type I diabetes than in those with type II diabetes of similar duration. In particular, recently diagnosed patients with type I diabetes had more episodes of biochemical hypoglycaemia (around 10-fold increase) and more episodes of self reported hypoglycaemia (median rate 22 v 1 episodes/year, p<0.001) when compared with people with type II diabetes recently started on insulin. However, for this latter group there were no differences in the proportions experiencing biochemical hypoglycaemia (20% v 22% (95% confidence interval for the difference, −13% to +9%)) or severe self reported hypoglycaemia (7% v 7% (−8% to +7%)) when compared with people with tablet treated type II diabetes.

Conclusions: Individuals with type II diabetes initiating insulin treatment are at no greater risk of hypoglycaemia than those treated with tablets, and are at considerably lower risk of hypoglycaemia than recently diagnosed patients with type I diabetes, for at least the first two years. Given the limits currently placed on all people with insulin treated diabetes, it is important that these findings are noted by policymakers responsible for determining restrictions on employment and driving.

51 REPERFUSION IN ACUTE MYOCARDIAL INFARCTION: WHERE AND WHEN ADMINISTERED AND OUTCOME IN 40 000 PATIENTS

R. R. West1, J. S. Birkhead2, C. F. M. Weston on Behalf of MINAP Steering Group3.1Wales Heart Research Institute, Cardiff, UK; 2Northampton General Hospital, Northampton, UK; 3Singleton Hospital, Swansea, UK

Background: Evidence from large randomised trials of benefit of early over late thrombolysis in acute myocardial infarction (MI) has led to practical means of reducing time to thrombolysis, initially “door to needle” time and more recently “call to needle” time including protocols for prehospital thrombolysis by paramedics.

Objective: To compare 30 day outcome of MI patients according to where and when thrombolysis was administered.

Method: Analysis of myocardial infarction national audit project (MINAP) data of patients reperfused, where location of reperfusion, time to reperfusion, and 30 day vital status were known, adjusting for principal potential confounders of age, sex, and past medical history.

Results: In 2003–2004 in England and Wales, MINAP recorded some 40 000 reperfused MI patients, for whom location of reperfusion (before admission, accident and emergency (A&E), coronary care unit (CCU) direct, CCU indirect, and elsewhere in hospital), time to reperfusion, and 30 day vital status were known. Median times from onset of symptoms to reperfusion were 98, 157, 157, 288, and 232 minutes, respectively. Median times from call for help to reperfusion were 40, 69, 73, 156, and 103 minutes, respectively. Thirty day mortality—4.6%, 9.9%, 10.8%, 15.8%, and 14.4%, respectively—was very highly correlated with both time from call to reperfusion (R² = 0.895) and onset to reperfusion (R² = 0.936). Multivariate analysis showed that significant differences in 30 day mortality between locations were maintained after adjustment for age (five year age groups), sex, and previous medical history (MI, hypertension, hypercholesterolaemia, diabetes, angina). Thus, despite case mix differences and conservative inclusion criteria in prehospital thrombolysis protocols, early administration appears to be independently associated with improved 30 day outcome.

Conclusions: This large observational study adds further support for early reperfusion and now, since much of the “door to needle” time gain has been achieved with direct admission to CCU or treatment in A&E, for the potential benefits of prehospital thrombolysis.

52 PROFILING UK HOSPITAL MORTALITY RATES FOR ACUTE CORONARY SYNDROME PATIENTS USING THE MYOCARDIAL INFARCTION NATIONAL AUDIT PROJECT DATABASE

S. O. M. Manda1, C. P. Gale2, A. S. Hall2.1Biostatistics Unit, Centre for Epidemiology and Biostatistics; 2Academic Unit of Cardiovascular Medicine, University of Leeds, Leeds, UK

Background: Quality assurance and audit are central to good medical practice. They allow targeting of resources in an attempt to minimise the “postcode lottery of care”. Although the implementation of the National Service Framework for Coronary Heart Disease has underpinned substantial improvement in the care of patients with acute coronary syndromes (ACS), geographical variation remains evident today. National variation in mortality post-ACS has historically been attributed in large part to case mix. Risk scores have been developed to identify patients who are at increased risk of adverse outcome and who may require additional care.

Objective: To quantify the excess mortality after accounting for case mix using a validated mortality risk model.

Methods: The Myocardial Infarction National Audit Project (MINAP) collects information on patients admitted to hospital in England with ACSs. The Evaluation of the Methods and Management of Acute Coronary Events (EMMACE) risk model uses patient age, admission heart rate, and systolic blood pressure to provide the probability of 30 day mortality post-ACS. Using EMMACE we derived standardised mortality ratios (SMRs) by hospitals in England and obtained risk adjusted mortality rates. We considered overall survival and modelled it using Cox proportional hazard regression with unobserved hospital random effects (hospital frailty) to control for the unobserved mortality risk.
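
The indirect standardisation step can be sketched as follows (the column names are invented, and the EMMACE probability is assumed to have been computed for each patient beforehand).

    import pandas as pd

    pts = pd.read_csv("minap_patients.csv")           # hypothetical: one row per patient
    # pts["p_death30"]: EMMACE-predicted probability of 30 day death
    # pts["died30"]: observed 30 day outcome (0/1)
    by_hospital = pts.groupby("hospital").agg(
        observed=("died30", "sum"),
        expected=("p_death30", "sum"),                # expected deaths given case mix
    )
    by_hospital["smr"] = by_hospital["observed"] / by_hospital["expected"]
    print(by_hospital.sort_values("smr", ascending=False).head())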

Results: We identified substantial institutional variation in 30 day SMR, and confirmed excess mortality above that due to case mix. In addition, frailty analysis showed that there were differences in the unobserved baseline hospital risk.

Conclusions: We have assessed the excess hospital mortality across the entire spectrum of ACS; after taking into account potential deficits in the EMMACE risk model, there seem to be risk factors that may influence outcome. Explanations for this include variation in institutional processes rather than patient characteristics.

53 REGIONAL VARIATION IN THE PREVALENCE OF DIABETES MELLITUS: ASSOCIATIONS WITH CALORIE SUPPLY AND THE PREVALENCE OF LOW BIRTHWEIGHT

G. T. H. Ellison.St George’s University of London, London, UK

Background: It is suggested that undernutrition in utero increases susceptibility to type II diabetes mellitus (DM), particularly among adults who subsequently become obese. These analyses aimed to investigate whether the prevalence of low birthweight might explain global variation in DM.

Materials and methods: Estimates of diabetes mellitus (type I and II) cases were obtained for 189 countries in 1995 and 2025 from the 1997 World Health Report and were converted to cases per 1000 using 1996 population estimates and the 1980–1995 average annual population growth rate. Per capita calorie supply estimates (as a percentage of recommended daily allowances, RDA) between 1988 and 1990, and the estimated percentage of low birthweight infants between 1990 and 1994, were obtained for 105 of these countries from The State of the World’s Children 1997.

Results: After controlling for average annual population growth rate to adjust for differences in age structure, there were substantial regional differences in the global distribution of DM for both 1995 (F = 30.65; df = 5, 105; p<0.001) and 2025 (F = 36.40; df = 5, 105; p<0.001): African countries had the lowest estimated marginal mean prevalence (1995, 5.7; 2025, 7.4 cases per 1000), while those classified as Western Pacific (1995, 27.4; 2025, 38.6 cases per 1000) and European (1995, 28.7; 2025, 40.7 cases per 1000) had the highest. The 1995 prevalence of DM increased with increasing per capita calorie supply (B(SEM) = 0.014 (0.002); p<0.001), but decreased with an increasing percentage of low birthweight infants (B(SEM) = −1.070 (0.147); p<0.001). While appearing to contradict the “fetal origins” hypothesis, this reflects the lower percentage of low birthweight infants in countries with a higher per capita calorie supply (B(SEM) = −0.008 (0.001); p<0.001). Separating countries into those with a relatively high (>110% RDA, “HF”) or low (<110% RDA, “LF”) per capita calorie supply, and those with a relatively high (>10%, “LBW”) or low (<10%, “NBW”) percentage of low birthweight infants, revealed significant differences in DM prevalence for both 1995 (F = 15.68; df = 3, 105; p<0.001) and 2025 (F = 11.84; df = 3, 105; p<0.001): HF-NBW countries had the highest estimated marginal mean prevalence of DM in 1995 (26.1 cases per 1000); HF-LBW countries were close behind (23.1 cases per 1000) and showed the highest projected increase in prevalence (41.6%), so that by 2025 these HF-LBW countries are projected to have the highest prevalence of DM (32.7 cases per 1000).

Conclusion: These results suggest that a high prevalence of low birthweight exacerbates the impact of a high per capita calorie supply on the prevalence of diabetes mellitus.

Evaluation

54 THE ADMISSION OF OLDER PEOPLE TO NURSING AND RESIDENTIAL HOMES IN NORTHERN IRELAND—A RETROSPECTIVE STUDY OF THE VARIATIONS AND DETERMINANTS

E. Pye, D. O’Reilly.Department of Epidemiology and Public Health, Queen’s University, Belfast, UK

Objectives: To determine the relative contribution of individual, household, community, geographic, and Community Trust factors to explaining the variation in nursing and residential home admission rates for older people (⩾65) in Northern Ireland.

Design: This retrospective cohort study used data from the Data Retrieval in General Practice (DRGP) project to identify a cohort aged 65 and over not living in institutional care at April 2000. This population was observed over the following five years to identify those who were subsequently admitted to a nursing or residential home. Logistic regression analysis was undertaken to determine the factors associated with this transition.

Subjects: 28 656 individuals aged 65 or over not living in institutional care at April 2000.

Primary outcome measure: Admission to a nursing or residential home within the five year study period.

Results: Multivariable logistic regression analysis showed that the likelihood of admission increased with increasing age, with those aged 85 and over being 16 times more likely to be admitted than those aged 65 to 69. Women were 61% more likely than men to be admitted, and the female to male ratio increased with increasing age. Those with poorer health (eight or more conditions) were more than twice as likely as their healthier counterparts (no or one condition) to be admitted. People living alone were 36% (95% confidence interval, 21% to 54%) more likely to be admitted than those who lived with one or more other adults. Those living in semiurban areas were most likely to be admitted. Significant variation in admission rates was also evident at Community Trust level.

Conclusions: Individual, household, geographic, and Community Trust factors were found to contribute to the variation in nursing and residential home admission rates for older people in Northern Ireland. Further work is being conducted to determine the specific health conditions associated with admission (such as dementia, incontinence, or hip fracture), and whether variation in admission rates according to Community Trust is due to differences in supply, consumer demand, or Trust practices and policies.

55 THE WINCHESTER FALLS PROJECT: A CLUSTER RANDOMISED COMMUNITY INTERVENTION TRIAL OF SECONDARY PREVENTION OF FALLS IN COMMUNITY DWELLING OLDER PEOPLE

S. George1, C. Spice2, W. Morotti3, J. Rose3, S. Harris1, C. Gordon2.1Public Health Sciences and Medical Statistics, University of Southampton, Southampton, UK; 2Royal Hampshire County Hospital, Winchester, UK; 3Mid-Hampshire Primary Care Trust

Background: Falls in community dwelling older people are common and have significant consequences. Recurrent fallers are particularly at risk of harm. It is unclear how best to reduce falls in recurrent fallers who have not presented to an emergency department.

Objective: To assess the effectiveness of two different interventions, one in primary care and one in secondary care, in preventing further falls in recurrent fallers identified in the community.

Design: Cluster randomised controlled trial, clustered by general practice.

Setting: 18 general practices in and around Winchester, and one general hospital.

Main outcome measures: Falls and their sequelae in the year following recruitment

Subjects: Subjects were aged 65 years or over, lived in the community, had had two or more falls in the previous year, and had not presented to an accident and emergency department.

Interventions: Practices were randomly allocated to one of three groups. Patients were identified through practice nurses, district nurses, health visitors, ambulance paramedics, and general practitioners. The primary care group received a nurse led assessment in the community, targeted at identified risk factors. The secondary care group underwent a structured multidisciplinary assessment in a day hospital. The control group received usual care. Follow up was for one year. Data were analysed using intention to treat (ITT) analysis. Multiple regression techniques were used with adjustment for potential confounders and for clustering at practice level.
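
One way to respect the clustering by practice is a GEE logistic model with an exchangeable working correlation, sketched below with assumed variable names (the abstract does not specify the software or exact model form used).

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    trial = pd.read_csv("falls_trial.csv")            # hypothetical trial dataset
    fit = smf.gee(
        "fell ~ C(arm) + age + sex",                  # arm: control/primary/secondary
        groups="practice",                            # the clustering unit
        data=trial,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    ).fit()
    print(fit.summary())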

Results: 505 participants were recruited: 465 (92%) completed follow up or fell during follow up. The mean age of participants was 82 years. Assuming all those lost to follow up fell, the proportion of participants who fell during the study was significantly lower in the secondary care group (74.8%, 157/210) than in the control group (83.6%, 133/159) (odds ratio (OR) = 0.50 (95% confidence interval, 0.33 to 0.77), p = 0.001). The primary care group showed similar results to the control group (86.0%, 116/136; OR = 1.04 (0.53 to 2.11), p = 0.923). No other significant results were found in the ITT analysis, but using observed data significant reductions were found in the secondary care group in fractures, fall related hospital admissions, and deaths. Differences are probably attributable to differences in medication review between the primary and secondary care arms (medication change, 16.3% in the primary arm and 51.6% in the secondary arm).

Conclusions: A secondary care structured multidisciplinary assessment of recurrent fallers significantly reduced the risk of further falls, but a primary care based targeted assessment did not. More research is needed on different ways to undertake medication review.

56 DISCRIMINATION EXPERIENCED BY PEOPLE LIVING WITH HIV IN LONDON

J. Elford1, J. Anderson2, F. Ibrahim1, C. Bukutu1.1City University London; 2Homerton University Hospital, London, UK

Background and objective: People living with HIV have experienced stigma and discrimination since AIDS was first reported in 1981. In the UK, the two groups most affected by HIV are gay men (mostly white) and black African heterosexual men and women. This paper examines the extent to which people living with HIV in London in 2004–2005 experienced discrimination as a result of their infection.

Methods: The majority of people diagnosed with HIV in the UK receive their clinical care in NHS outpatient clinics. Consequently an NHS clinic sample is broadly representative of all those living with diagnosed HIV. Patients with HIV infection attending NHS outpatient clinics in East London between June 2004 and June 2005 were invited to participate in the study. Those who agreed to participate completed a confidential, self administered questionnaire. The questionnaire sought information on a number of socioeconomic variables including ethnicity and sexual orientation. Respondents were asked: “Have you ever been treated unfairly or differently because of your HIV status—in other words discriminated against?”. Those who answered “Yes” were then asked: “By whom?”

Results: During the study period, 2680 patients with HIV attended the outpatient clinics in the six participating hospitals, of whom 2299 were eligible for the study and 1687 completed a questionnaire (response rate 73% of eligible patients, 63% of all patients). Of the 1687 respondents (median age 38 years), 480 were black African heterosexual women, 224 black African heterosexual men, and 758 gay or bisexual men (646 white, 112 ethnic minority). Overall 475 respondents (30%) said they had been discriminated against because of their HIV status; gay men 34%, African women 27%, African men 21% (p = 0.001). People who had been diagnosed longer were more likely to have experienced discrimination (adjusted odds ratio (aOR) = 1.10 per year (95% confidence interval, 1.06 to 1.12), p<0.001) as were those whose body showed signs of HIV (for example, lipodystrophy) (aOR = 2.07 (1.61 to 2.66), p<0.001) or who used voluntary services (aOR = 1.83 (1.36 to 2.46), p<0.001). Of the 475 respondents who reported being discriminated against, 238 (50%) said this had involved a health care worker including their dentist (n = 121, 26%) or their GP (n = 85, 18%).

Conclusions: Nearly one third of people living with HIV, surveyed in London clinics in 2004–2005, said they had been discriminated against because of their infection. Half of those experiencing discrimination said this involved a health care worker. Tackling HIV discrimination in the health care setting should be given priority.

57 DEVELOPMENT AND VALIDATION OF SELF COMPLETION MEASURES OF ATTITUDES TOWARDS EATING BREAKFAST AND BREAKFAST EATING BEHAVIOURS IN 9–11 YEAR OLD SCHOOLCHILDREN

G. Moore, K. Tapper, S. Murphy, R. Lynch, C. Pimm, L. Raisanen, L. Moore.Cardiff Institute of Society, Health, and Ethics, Cardiff University, Cardiff, UK

Background: Efforts to evaluate interventions to improve children’s dietary behaviour are hampered by the limited availability of validated outcome measures that are suitable for use in large scale randomised trials, and among primary schoolchildren.

Objective: To develop and validate a dietary recall questionnaire, designed for group level comparisons of foods eaten at breakfast, and a questionnaire to measure children’s attitudes toward eating breakfast.

Design: Cross sectional (validity) and longitudinal (reliability) survey.

Setting: 58 primary schools located in deprived areas of Wales.

Subjects: Years 5 and 6 pupils in these 58 schools. The attitudes questionnaire was piloted with 199 children. Both measures were then completed by 2382 children. Dietary recall interviews were conducted with a subset of 374 children. Reliability of the dietary recall questionnaire was assessed using 29 schools, with 1233 children at baseline and 1033 at follow up.

Methods: Both measures were administered to a large sample of children; subsamples were then randomly selected either (1) to complete a dietary recall interview or (2) to have their parents complete a questionnaire relating to their child’s breakfast eating habits. Validity of the recall questionnaire was assessed relative to the 24 hour dietary recall interviews, and reliability by comparing responses at baseline with four month follow up data.
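
Agreement between questionnaire and interview reports of an individual food item can be quantified with Cohen’s kappa, for example (toy data below; “fair”, “moderate”, and “substantial” are the conventional kappa bands).

    from sklearn.metrics import cohen_kappa_score

    questionnaire = [1, 0, 1, 1, 0, 1, 0, 0]          # toy: cereal reported eaten?
    interview = [1, 0, 1, 0, 0, 1, 0, 1]              # reference 24 hour recall
    print(cohen_kappa_score(questionnaire, interview))  # 0.5, i.e. "moderate"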

Results: The 13 item attitudes scale showed good construct validity, high internal reliability, and acceptable test–retest reliability. Children who did not skip breakfast showed more positive attitudes than children who skipped breakfast. Positive attitudes toward breakfast were significantly correlated with consumption of a greater number of “healthy” foods for breakfast (for example, fruit, bread, cereal, milk products), consumption of fewer “unhealthy” foods for breakfast (sweet items, crisps), and parental perceptions that their child usually ate a healthy breakfast. On the dietary recall questionnaire, results indicated moderate to substantial agreement for foods eaten at breakfast on the day of reporting and fair to moderate agreement for the previous day. Correlations were moderate in terms of “healthy” and “unhealthy” items consumed at breakfast on the day of reporting, but weaker for the previous breakfast. The measure demonstrated fair to substantial group level reliability.

Conclusions: The breakfast attitudes questionnaire is a robust measure that is relatively quick to administer and simple to score. The dietary recall questionnaire gives an adequately valid and reliable overview of selected aspects of children’s diet. Both measures offer significant potential for the evaluation of school based interventions aiming to address breakfast eating.

58 RELATION OF HEIGHT, WEIGHT, AND BODY MASS INDEX TO THE RISK OF PRIMARY HIP AND KNEE REPLACEMENTS

B. Liu1, A. Balkwill1, E. Banks2, C. Cooper3, J. Green1, V. Beral1.1CRUK Epidemiology Unit, University of Oxford, Oxford, UK; 2National Centre for Epidemiology and Public Health, Australian National University, Canberra, Australia; 3MRC Epidemiology Resource Centre, University of Southampton, Southampton, UK

Objectives: To examine the effect of height, weight, and body mass index (BMI) on the risk of primary hip and knee replacement in middle aged women.

Design: Prospective population based cohort study (the Million Women Study).

Setting: Women recruited from breast screening clinics from 1996–2001 in England and Scotland.

Participants: 343 920 women aged 50–69 years at recruitment into the Million Women Study who returned a follow up questionnaire 2.9 years later.

Main outcome measures: Incident self reported primary hip and primary knee replacement.

Results: The rate of primary hip replacement was 1.6 per 1000 person-years (n = 1572). The rate of primary knee replacement was 0.8 per 1000 person-years (n = 773). After controlling for age, deprivation index, region of recruitment, and BMI where appropriate, increasing height, weight, and BMI were all found to increase the risk of primary hip and primary knee replacement. Comparing the tallest women (⩾170 cm) with the shortest (<155 cm), the relative risk of hip and knee replacement was 1.76 (95% confidence interval, 1.38 to 2.24) and 1.57 (1.13 to 2.19), respectively. Comparing the heaviest women (⩾75 kg) with the lightest (<60 kg), the relative risk of hip and knee replacement was 2.48 (2.08 to 2.96) and 8.63 (6.10 to 12.25), respectively. Comparing women with the highest BMI (⩾30 kg/m²) with the lowest (<22.5 kg/m²), the relative risks of hip and knee replacement were 2.65 (2.19 to 3.22) and 10.91 (7.50 to 15.87), respectively. These effects did not vary significantly when examined in various subgroups.

Conclusions: Height, weight, and body mass index all increase the risk of hip and knee replacement in middle aged women. The risk conferred by height and weight is similar for hip and knee replacement, although the magnitude of the effect of BMI is significantly greater on knee replacement than hip replacement.

Health services research

59 “SHOULD I STAY OR SHOULD I GO?” MAKING THE DECISION ABOUT WHETHER TO GO TO HOSPITAL AFTER A 999 CALL

A. M. Porter1, H. A. Snooks1, R. Whitfield2, A. Youren1, S. Gaze1, F. Rapport1, M. Woollard2.1CHIRAL, School of Medicine, University of Swansea, UK; 2PERU, Welsh Ambulance Service NHS Trust

Background: Ambulance crews responding to 999 calls generally take patients to hospital, but in up to 30% of cases, the patient is not conveyed. In most UK ambulance services, crews follow guidance based on a principle of patient autonomy: the patient can decide to stay home, assuming they are competent. Previous studies have raised doubts about whether the process is, in fact, as clear cut as this. Non-conveyance decisions potentially involve clinical risk and risk of litigation if things go wrong.

Objective: To address the following questions: Do crew members feel it is appropriate for some patients to stay at home following a 999 call? In practice, who do crews feel makes the decision that the patient should stay at home? Do crew members feel that ambulance service protocols, skills, and training about non-conveyance provide adequate support?

Method: Three focus groups with a total of 25 ambulance crew members in one UK ambulance service. Interviews were tape recorded, transcribed, and then analysed thematically.

Results: The process of “refusal” to travel was more complex and varied than policy tended to acknowledge, and commonly took the form of negotiated/shared decision making. There were tensions and ambivalent feelings on the part of crew members in relation to non-conveyance. Crews wanted increased power in decision making, but were concerned about being held responsible.

Conclusions: The results suggest that the Trust needs to review and clarify the current policy for non-conveyance. This situation presents a new perspective on shared decision making.

60 SOCIOECONOMIC INEQUALITY IN SMALL AREA USE OF ELECTIVE TOTAL HIP REPLACEMENT IN THE ENGLISH NHS IN 1991 AND 2001

R. Cookson1,2, M. Dusheiko2, G. Hardman2.1School of Medicine, Health Policy and Practice, University of East Anglia; 2Centre for Health Economics, University of York, UK

Objective: To compare socioeconomic inequality in small area use of elective total hip replacement in the English NHS in 1991 and 2001.

Design: A population study using routine hospital data from records aggregated to small areas using a common geography of frozen 1991 wards.

Subjects: Adults aged over 44 recorded in hospital episode statistics as having undergone elective total hip replacement in an English NHS hospital in either 1991/2 or 2001/2.

Main outcome measures: Utilisation rate ratio between top and bottom Townsend deprivation quintile groups, indirectly standardised for age and sex, and concentration index of deprivation related inequality in standardised utilisation ratios between small areas.
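
A population weighted concentration index of this kind can be computed as in the sketch below (toy numbers; the sign depends on the direction of the deprivation ranking).

    import numpy as np

    def concentration_index(y, deprivation, population):
        order = np.argsort(deprivation)               # rank areas by deprivation
        y = np.asarray(y, dtype=float)[order]
        w = np.asarray(population, dtype=float)[order]
        w /= w.sum()
        rank = np.cumsum(w) - 0.5 * w                 # weighted fractional rank
        mu = np.sum(w * y)                            # weighted mean outcome
        return 2.0 * np.sum(w * y * (rank - 0.5)) / mu

    # Toy example: standardised utilisation ratios falling with deprivation.
    print(concentration_index(y=[1.14, 1.02, 0.95, 0.80],
                              deprivation=[1, 2, 3, 4],
                              population=[100, 120, 90, 110]))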

Results: In each year, there was a clear socioeconomic gradient that flattened in the more affluent half of the distribution. Standardised utilisation ratios for the most and least deprived quintile groups were 0.804 (95% confidence interval, 0.784 to 0.824) and 1.135 (1.103 to 1.168) in 1991, and 0.843 (0.825 to 0.861) and 1.075 (1.049 to 1.102) in 2001. The corresponding rate ratios were 1.412 and 1.275. The proportionate increase in use required to bring the bottom quintile to the level of the top thus fell from 41.2% to 27.5%. The concentration index of deprivation related inequality between small areas fell from 0.069 (0.059 to 0.079) in 1991 to 0.060 (0.050 to 0.071) in 2001.

Conclusion: There is substantial socioeconomic inequality in use of NHS elective total hip replacement that has not changed much from 1991 to 2001 but may have decreased slightly.

61 ASSAULTS WITH A SHARP OBJECT RESULTING IN HOSPITAL ADMISSION IN ENGLAND, 1997 TO 2005

R. Maxwell1,2, C. Trotter1,2, J. Verne2, P. Brown2, D. Gunnell1.1Department of Social Medicine, University of Bristol; 2South West Public Health Observatory, Bristol, UK

Background: Anecdotal evidence suggests that we live in an increasingly violent society, and that serious assaults are rising. While firearms are still comparatively hard for members of the public to obtain, sharp objects—for example knives, razors, and broken bottles—are freely available and there is widespread concern that their use in violent assaults is increasing. If serious assaults are rising, then there should be a corresponding increase in the annual number of patients admitted to hospital where assault with a sharp object has been recorded on their admission records.

Objectives: To investigate and describe recent trends in assaults with sharp objects using national data on hospital admissions.

Methods: Data on hospital admissions between 1 April 1997 and 31 March 2005 with a mention of “assault by a sharp object” (ICD10 X99) on the admission record were extracted from the Department of Health’s hospital episode statistics database (HES) and analysed using Stata. Any record that also recorded a contradictory code such as accidental injury was excluded from the analyses.
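
The extraction logic might be sketched as follows (the field names and the example exclusion codes are assumptions, not the actual HES schema).

    import pandas as pd

    hes = pd.read_csv("hes_extract.csv", dtype=str)   # one row per admission
    diag_cols = [c for c in hes.columns if c.startswith("diag_")]

    mentions_assault = hes[diag_cols].apply(
        lambda col: col.str.startswith("X99", na=False)).any(axis=1)
    contradictory = hes[diag_cols].apply(             # e.g. accidental contact with
        lambda col: col.str.match("W2[5-9]", na=False)).any(axis=1)  # sharp objects

    assaults = hes[mentions_assault & ~contradictory]  # analysis dataset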

Results: Preliminary analyses showed that the overall number of patients admitted to hospitals in England with assault related injuries from sharp objects during the study period had increased by 30%, from 3770 patients in 1997/8 to 4891 in 2004/5. Case fatality was 0.5%. Peak days of admission occurred at the weekend (42% of total admissions were on a Saturday or Sunday). Males accounted for 90% (males 30 464; females 3406) of the admissions between 1997 and 2005: 89% (27 128) of the male admissions were aged between 15 and 44 years. A slightly lower proportion of females, 82% (2790), were aged between 15 and 44 years. The mean age at admission for males was 29 years and for females 31 years. Analysis of primary and secondary diagnosis showed that 49% (14 786) of the admissions among men were for injuries to the head, neck, or thorax. The equivalent figure for women was 41% (1383).

Conclusions: These data show a marked rise in hospital admissions following assaults with sharp objects. More detailed research is required to quantify the burden on other health care resources such as accident and emergency departments, to identify the main weapons used in these assaults (ICD10 coding does not permit this), to develop appropriate public health responses to these trends, and to assess the impact these serious assaults have upon people’s lives.

62 DOES IT MATTER WHO TAKES YOUR CALL? THE EFFECT OF ATTITUDE TO RISK ON TELEPHONE TRIAGE DECISIONS

A. O’Cathain1, J. F. Munro1, I. Armstrong2.1University of Sheffield, Sheffield, UK; 2NHS 24

Background: Variation in practice is common among health professionals working in all care settings, and has also been shown in the assessments made by telephone triage nurses. For example, there is a threefold variation in the proportion of calls assessed to self care in NHS Direct, the national nurse telephone triage service for England. Health professionals’ attitude to risk has been proposed as one potential explanation of such variation, and we sought to examine this in telephone triage decisions.

Methods: NHS 24 is the national telephone triage service for Scotland, offering assessment and information on health problems to the general public. All nurses providing telephone assessment were sent a questionnaire about their attitudes to risk. This included an existing five item instrument measuring risk tolerance in clinical decision making, and items generated from qualitative interviews about competing risks within telephone triage. The response rate was 57% (265/464). Records of calls assessed by these nurses in a six month period in 2005 were identified, totalling 236 887 calls.

Results: According to the five item instrument, 59% of nurses had a “no risk taking” attitude to clinical decision making. There was variation in attitudes to risk among the nurses. For example, 27% of nurses (71/262) strongly agreed that an NHS 24 nurse “must not take any risks with physical illness”, while 17% (45/262) disagreed. There was also variation in nurses’ attitudes towards the competing risks of overloading busy services versus missing a serious illness, with 24% strongly agreeing and 11% disagreeing that “it was important not to overload busy services”. Similarly there was variation in competing risks of relying on the expertise of the computerised decision support software in NHS 24 versus the expertise of the health professional: 8% of nurses strongly agreed/agreed and 14% strongly disagreed that “the most important thing was to follow the algorithms of the software”.

Conclusions: Risk taking attitudes among telephone triage nurses were similar to those of British doctors—59% had a “no risk taking” attitude compared with 54% of doctors. There was variation in nurses’ attitudes to risk. The effect of attitude to risk on the proportion of calls triaged to self care will be examined using multilevel modelling. The amount of variation in triage decisions which is explained by nurses’ attitude to risk will be presented.

63 HAS NHS DIRECT WALES EASED PRESSURE ON OTHER IMMEDIATE CARE SERVICES?

W. Y. Cheung1, A. Watkins2, H. Snooks1, J. Peconi1.1CHIRAL, School of Medicine, Swansea University; 2European Business Management School, Swansea University, UK

Background: NHS Direct (NHSD) was set up to signpost callers to appropriate services and promote self care. It was hoped that in this way the new service would ease pressure on other emergency care services. Research undertaken following implementation in England did not find any associated changes in demand for other immediate health care services. However, with increasing awareness and volume of calls to NHSD over time, the longer term impact of the service on other emergency and unscheduled care services needs to be investigated.

Objective: To measure the impact of NHSD Wales (NHSDW) on demand for other immediate care services.

Methods: Monthly attendance data were requested from all accident and emergency (A&E) departments (n = 20), GP out of hours (OOH) cooperatives (n = 25), and the Welsh Ambulance Services Trust from June 1999 to December 2003. Standard time series techniques were used to measure changes in trends in volume of contacts/attendances following introduction of NHSDW in June 2000.
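
The reported b1 and b2 coefficients correspond to a segmented (interrupted) time series regression of the general form sketched below on simulated monthly counts (this does not reproduce the actual analysis).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    t = np.arange(55)                                 # June 1999 to December 2003
    post = (t >= 12).astype(int)                      # NHSDW launched June 2000
    y = 10_000 * np.exp(-0.0016 * t + 0.0030 * np.maximum(t - 12, 0))
    y = y * rng.lognormal(0.0, 0.01, size=t.size)     # simulated attendances

    df = pd.DataFrame({"log_att": np.log(y), "t": t, "t_post": (t - 12) * post})
    fit = smf.ols("log_att ~ t + t_post", data=df).fit()
    print(fit.params)                                 # pre-period slope and change in
                                                      # slope; their sum is the post trend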

Results: 15 A&E departments provided data on first attendances. Aggregate analysis showed an initial downward trend in the data (b1 = −0.00161, p = 0.139) becoming an upward trend following the introduction of NHSDW (b2 = 0.002979, p = 0.015). An average monthly decrease of 0.16% in total A&E first attendances became a monthly increase of 0.13%. Sixteen GP OOH cooperatives provided some data on total out of hours calls but only four provided evaluable data (no pre-NHSDW data: 5; major changes to their catchment areas: 6; not able to provide monthly data: 1). Three showed initial downward trends (b1 = −0.0138 to −0.003) which were slowed down or reversed after the introduction of NHSDW (b2 = 0.0022 to 0.0121). Data were received from all three ambulance service regions. Aggregate analysis showed an initial downward trend in the data (b1 = −0.00377, p = 0.034), becoming an upward trend following the introduction of NHSDW (b2 = 0.010841, p<0.0001). An average monthly decrease of 0.37% in 999 calls became a monthly increase of 0.7%.

Conclusions: This study has found no evidence of any substitution of demand between NHSDW and other immediate care services. Results suggest that the service has been associated over time with increased demand in other parts of the system.

Methods II

64 MEASUREMENT OF NEIGHBOURHOOD SOCIAL COHESION IN CAERPHILLY COUNTY BOROUGH: AN ECOMETRIC ANALYSIS

D. M. Farewell, D. L. Fone, F. Dunstan.Department of Epidemiology, Statistics and Public Health, Centre for Health Sciences Research, School of Medicine, Cardiff University, Cardiff, UK

Background: It is widely believed that the social environment has an important influence on health, but there is less certainty about how to measure specific factors within the social environment that link the neighbourhood of residence to a health outcome. In this paper we apply ecometric methodology to analyse question items purporting to relate to social cohesion.

Methods: We use data from 12 092 participants in the Caerphilly Health and Social Needs Study, who were sampled from within 325 enumeration districts. The responses of interest come from 15 question items designed to capture different facets of neighbourhood cohesion. Using a multilevel ecometric model, we decompose the variability present in these ordinal (Likert) responses into contextual, compositional, item-level and residual components.
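
As a simplified sketch of the decomposition, treating the Likert scores as continuous (which the ecometric model proper does not do), the variance components can be estimated with a linear mixed model; the long format dataset and names below are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    long = pd.read_csv("cohesion_long.csv")           # one row per person x item
    m = smf.mixedlm(
        "score ~ C(item)",                            # item-level differences
        data=long,
        groups="enum_district",                       # contextual (area) variance
        vc_formula={"person": "0 + C(person_id)"},    # compositional (person)
    ).fit()                                           # variance, nested in areas
    print(m.summary())                                # what remains is residual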

Results: There was important variability at each of the aforementioned levels of the multilevel model. Variability at the contextual and item levels is welcome; at the individual level it is a ubiquitous and well recognised phenomenon. Nevertheless, given the large sample size, we can reliably estimate contextual social cohesion, which we propose to use as an explanatory quantity in future studies of mental health.

Conclusions: Our study is the first to apply multilevel ecometric principles to describe both the usefulness and limitations of the social cohesion items in question. We establish that the items allow fine discrimination between individuals and that contextual variability in social cohesion exists, but that large sample sizes are needed in order to accurately estimate these quantities.

65 EVALUATING THE MACRO-ECONOMIC IMPACT OF INFECTIOUS DISEASE

M. R. Keogh-Brown, R. D. Smith.School of Medicine, Health Policy and Practice, University of East Anglia, Norwich, UK

Background: There is a positive relation between a nation’s health and its economic prosperity. In evaluating health care, economists typically concentrate on economic impacts within the health sector only, which may mis-specify the social costs and benefits of disease/intervention. Severe acute respiratory syndrome (SARS) illustrated the potential for a health problem to have an impact on non-health sectors, such as trade and tourism, many times the magnitude of its effects on the health sector. This paper shows the potential of using a macro-economic approach to model a major disease outbreak.

Objective: To demonstrate the importance of undertaking a macro-economic assessment of the impact of health problems, using infectious disease as a case study.

Methods: Using data from the 1918 and 1968 flu outbreaks and other sources, we estimated various potential shocks to the UK economy reflecting the impact of an infectious disease outbreak more generally. Aside from the obvious health sector impacts, this approach captures changes in behaviour that result from fear of disease, for example avoiding public transport and crowded public gatherings. These shocks were implemented within a macro-economic model of the UK.

Main outcome measures: The economic impact of an infectious disease outbreak was calculated as percentage impacts on macro-economic indicators such as household income, government transfers, tax revenues, unemployment, household utility, real gross domestic product, inflation, health care, and social services.

Results: This is a work in progress and results will be presented at the meeting. Preliminary analysis suggests there may be significant reductions in sectors such as retail sales and tourism during infectious disease outbreaks. However, as outbreaks are short sharp shocks, productivity increases rapidly in sectors that declined during the outbreak, creating strong post-outbreak gains that may outweigh the previous quarter’s losses. Long term losses are therefore relatively small, although some impact on gross domestic product (GDP), growth, and foreign direct investment (FDI) may show in annual figures.

Conclusions: Our modelling demonstrates the usefulness of a macro-economic approach in assessing health care issues likely to have far wider economic effects than those confined to the health care sector. This approach cannot replace micro-economic analysis/modelling, but in the case of a global health threat—where many affected individuals may not be admitted to hospital and where the greatest impacts on a country’s economy do not occur in the health care sector—macro-economic modelling is the preferable method of effect estimation.

66 WHAT DO MISSING DATA TELL US? USE OF MULTIPLE IMPUTATION TO REDUCE SELECTION BIAS IN MODELLING THE ASSOCIATION OF THROMBOTIC MARKERS WITH INCIDENT CORONARY HEART DISEASE: BRITISH WOMEN’S HEART AND HEALTH PROSPECTIVE COHORT STUDY

M. May1, D. A. Lawlor1, R. Patel1, A. Rumley2, G. Lowe2, S. Ebrahim3.1Department of Social Medicine, University of Bristol, Bristol, UK; 2Division of Cardiovascular and Medical Sciences, University of Glasgow, Glasgow, UK; 3Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK

Background: In the search for risk factors for coronary heart disease (CHD) it is important to consider whether associations are confounded by socioeconomic position and lifestyle. Typically a series of regression models are fitted to the data to produce estimates of risk factor–disease association that show attenuation of effect with incremental adjustment for confounding factors. For comparability of estimates, regressions must be fitted on the same set of study participants. As some data are likely to be missing on most variables, regression models with many covariates fitted using subjects with complete data often use markedly reduced subsets of the original population potentially leading to selection bias. Multiple imputation of missing data ensures that all subjects remain in the analyses.

Objective: To demonstrate the value of multiple imputation in providing unbiased, precise estimates using the associations of three thrombotic markers—von Willebrand factor antigen, D-dimer, and tissue plasminogen activator antigen—with incident CHD as an example.

Design: Prospective cohort study set in 23 towns in the UK.

Participants: 3582 women aged 60–79 years who were free of CHD at entry into the British Women’s Heart and Health Study.

Methods: Cox regression models were fitted to 25 imputed datasets and estimates appropriately combined. These were compared with analyses restricted to the complete data subset.
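
The pipeline described here (25 completed datasets, a Cox model fitted to each, estimates "appropriately combined") corresponds to pooling with Rubin's rules. A minimal sketch follows, using statsmodels for the imputation step and lifelines for the Cox fits; the column names are placeholders, and the duration and event columns are assumed complete.

```python
# Sketch of multiple imputation followed by Cox regression, with log hazard
# ratios pooled using Rubin's rules. Illustrative only: variable names are
# placeholders, not the study's covariates.
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData
from lifelines import CoxPHFitter

def pooled_cox(df: pd.DataFrame, duration_col: str, event_col: str, m: int = 25):
    imp = MICEData(df)                        # chained-equations imputer
    logs, variances = [], []
    for _ in range(m):
        data = imp.next_sample()              # one completed dataset
        cph = CoxPHFitter().fit(data, duration_col=duration_col,
                                event_col=event_col)
        logs.append(cph.params_)              # log hazard ratios
        variances.append(cph.standard_errors_ ** 2)
    logs, variances = pd.DataFrame(logs), pd.DataFrame(variances)
    qbar = logs.mean()                        # pooled log HR
    within = variances.mean()                 # mean within-imputation variance
    between = logs.var(ddof=1)                # between-imputation variance
    total = within + (1 + 1 / m) * between    # Rubin's rules total variance
    se = np.sqrt(total)
    return pd.DataFrame({
        "HR": np.exp(qbar),
        "lower 95%": np.exp(qbar - 1.96 * se),
        "upper 95%": np.exp(qbar + 1.96 * se),
    })
```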

Results: In models adjusting for age and town only, there was no association between von Willebrand factor or D-dimer and incident CHD, but there was a positive association of tissue plasminogen activator: the hazard ratio (HR) per log unit was 1.58 (95% confidence interval (CI), 1.11 to 2.24) using imputed data (n = 3582, 198 CHD events) and 1.90 (1.17 to 3.07) using the complete data subset (n = 1830, 93 events). With adjustment for potential confounding factors (socioeconomic position from across the life course, smoking, lung function, physical activity, alcohol consumption, body mass index, and waist–hip circumference) this association was attenuated to 1.30 (0.88 to 1.93) with imputed data and 1.61 (0.95 to 2.74) with the complete data subset. Further adjustment for risk factors that may be part of the same pathophysiological process linking this risk factor to CHD (HDLc, triglycerides, blood pressure, fasting glucose, insulin, C reactive protein, and fibrinogen) attenuated the HR to 1.06 (0.69 to 1.63) and 1.30 (0.72 to 2.33), respectively.

Conclusions: In this population only tissue plasminogen activator antigen showed an association with CHD, which was attenuated towards the null by adjustment for potential confounding factors. Selection bias in models restricted to the complete data subset inflated estimates of associations.

67 ADMINISTRATIVE AREAS: AN APPOSITE TOOL FOR NEIGHBOURHOOD RESEARCH?

M. J. Kelly, F. D. Dunstan, D. L. Fone.Department of Epidemiology, Statistics and Public Health, Centre for Health Sciences Research, Cardiff University, Cardiff, UK

Background: Studies investigating the effect of area of residence on individual outcomes routinely employ administrative areas to act as proxies for neighbourhoods. The validity of this approach has been questioned. An alternative method is to create new contiguous areas by grouping together smaller adjacent administrative areas (enumeration districts) based on the socioeconomic and demographic characteristics of their residents. These new synthetic areas are more internally homogeneous and should improve the efficiency of the analysis.

Objective: To investigate the suitability of administrative boundaries for modelling area effects on mental health by comparing with the use of synthetic areas.

Design: The study data came from the Caerphilly Health and Social Needs survey. Synthetic areas were created by comparing the composition of adjacent enumeration districts (EDs) in terms of various socioeconomic characteristics. Adjacent EDs were merged if their compositions were similar enough, as determined using Wilcoxon effect sizes. This process was applied iteratively, producing a set of synthetic boundaries. As no gold standard is available, the two approaches (administrative and synthetic) were compared with respect to internal homogeneity, intraclass correlation (ICC) coefficients, penalised likelihoods, and estimated covariate coefficients. The comparison was carried out both in a multilevel model and in a Bayesian framework. In the former the variability was split between the levels of the hierarchy (individual, enumeration district, and electoral/synthetic ward), allowing the assessment of the influence of each context on mental health. The Bayesian model used an ED adjacency matrix to split the variability into spatial and heterogeneity components.
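
The merging step might be sketched as below. The abstract does not specify the algorithm's details, so the adjacency structure, the single-variable comparison, and the use of a rank-biserial effect size (derived from the Mann–Whitney form of the Wilcoxon test) are all simplifying assumptions.

```python
# Toy sketch of iterative merging of adjacent areas whose resident-level
# distributions are similar. All structural choices here are assumptions.
from scipy.stats import mannwhitneyu

def merge_areas(values, adjacency, threshold=0.2):
    """values: dict area -> list of resident-level scores.
    adjacency: dict area -> set of neighbouring areas."""
    merged = True
    while merged:
        merged = False
        for a in list(values):
            for b in list(adjacency.get(a, ())):
                if a not in values or b not in values or a == b:
                    continue
                u, _ = mannwhitneyu(values[a], values[b])
                # Rank-biserial effect size in [-1, 1]; near 0 means similar.
                r = 1 - 2 * u / (len(values[a]) * len(values[b]))
                if abs(r) < threshold:
                    values[a] = values[a] + values.pop(b)    # absorb b into a
                    adjacency[a] = (adjacency[a] | adjacency.pop(b)) - {a, b}
                    for nbrs in adjacency.values():          # repoint neighbours
                        if b in nbrs:
                            nbrs.discard(b)
                            nbrs.add(a)
                    merged = True
    return values
```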

Subjects and setting: 10 892 persons aged 18 to 74 residing in Caerphilly County Borough, Wales in 2001.

Main outcome measures: Individual mental health, assessed by the mental health subscale of the SF-36 (MHI-5), is the response in the multilevel and Bayesian models. The final outcome was a set of synthetic boundaries.

Results: Initial results indicated that the within-area variability was substantially reduced, and that the synthetic area ICC coefficient for the highest level was 10% larger than the equivalent administrative ICC. Also, the heterogeneity component from the Bayesian model was smaller for the synthetic areas than for administrative ones.

Conclusions: Initial investigations indicate that the use of synthetic areas is preferable to the use of administrative ones. The benefits to be gained, however, may be limited by the large size of the basic building blocks. Smaller areas, such as the 2001 census defined output areas, would be preferable.

68 WHO GETS MISSED OUT IN A LONGITUDINAL STUDY BASED ON THE CENSUS?

D. O’Reilly, M. Rosato, S. Connolly.Department of Epidemiology and Public Health, Queen’s University Belfast, Mulhouse Building, Royal Group of Hospitals, Grosvenor Rd, Belfast, UK

Background: Longitudinal studies based on the census are very powerful and efficient ways of studying social phenomena. However, as with all follow up designs, quality depends in part on the completeness of the record linkage, though this is rarely analysed in any depth. The backward linkage to census records of all deaths of Northern Ireland residents during the four post-census years provides an opportunity to describe those who are left out of this linkage process.

Setting: Northern Ireland population.

Methods: A comparison of the matched and unmatched death records was undertaken using multivariate logistic regression. The subject attributes were as recorded on the death certificate. The postcode was used to ascribe the relative affluence and population density of the super-output area where they lived.

Results: Of the 54 085 deaths during this period, 3368 (6.0%) could not be matched to a census record. The unmatched rate was highest for young adults; for males (fully adjusted OR = 1.18 (95% confidence interval, 1.15 to 1.33)); for those who were unmarried (ORs = 2.49, 1.90, and 2.62 for never married, widowed, and separated/divorced); for deaths in nursing or residential homes (OR = 1.73 (1.58 to 1.89)); and in the most deprived quintile (OR = 1.79 (1.59 to 2.02)). There were significant differences in the distribution of causes of death between the matched and unmatched records. Those with unmatched records were less likely to have died from cardiovascular disease or from cancer, and more likely to have died from external causes (such as accidents and suicide) and from “other causes”. Adjustment for sociodemographic factors, place of death, and area characteristics attenuated but did not eliminate these differences.

Conclusions: The non-linkage of death and census records probably reflects a combination of non-enumeration at the time of the census and deficient information related to the deceased at the time of death. People who are missed out in the matching process are perhaps more vulnerable or socially excluded with the result that findings based on the matched records may slightly understate the social gradients.

Occupational health and military health

69 WORK RELATED INFECTIOUS DISEASE: A COMPARISON OF INFORMATION COLLECTED BY PHYSICIANS REPORTING TO THE HEALTH AND OCCUPATION REPORTING NETWORK (THOR) WITH DATA FROM THE HEALTH PROTECTION AGENCY (HPA)

A. Bolton1, I. Gillespie2, S. O’Brien1, S. Turner1, A. Slovak1, M. Carder1, S. Lines1, R. Parker1, L. Fulluck1, R. Agius1.1Division of Epidemiology and Health Sciences, University of Manchester, Manchester, UK; 2Health Protection Agency, Centre for Infections, Colindale Avenue, London, UK

Background: Infectious diseases are an important cause of work related illness and result in self reported ill health, non-attendance at work, and referrals to clinicians, with obvious socioeconomic consequences. Information collected by schemes such as THOR and the HPA is invaluable in investigating those at high risk (for example, specific groups of employees); here the two schemes are examined for areas of commonality and difference.

Aims: To examine work related infectious diseases as reported to two schemes—THOR and the HPA—and to identify areas of strength and weakness in these schemes.

Methods: Increasing use of electronic databases allowed for record linkage and checking of completeness and accuracy within and between data sources. Population characteristics such as infecting organism, place, and time were used for matching. Capture–recapture (CR) statistical methods were used, which rely on the two data sources being independent of each other, and all individuals having an equal probability of being captured. Estimates of numbers of unreported cases of disease were made by comparing data originating from either THOR or the HPA.
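
With two sources, the standard capture–recapture estimator is the Lincoln–Petersen estimate; the sketch below uses its Chapman-corrected form. The abstract does not name the estimator used, and the example figures are illustrative rather than study data.

```python
# Two-source capture-recapture sketch: the Chapman-corrected Lincoln-Petersen
# estimator of total (reported plus unreported) cases, assuming independent
# sources and equal capture probability, as the abstract notes.
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """n1, n2: cases in each source; m: cases matched in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative numbers only: 400 THOR cases, 300 HPA cases, 120 matched
# gives an estimate of about 997 cases in total, implying several hundred
# unreported cases.
print(chapman_estimate(400, 300, 120))
```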

Results: Between 2002 and 2004, 4448 estimated cases of infectious disease were reported to THOR, the majority (86%) being diarrhoeal disease. In the diarrhoeal disease category, the most frequently reported industrial sectors were social care (46%), health care (31%), and hotels and catering (5%), with the causal agents most frequently cited being norovirus and SRSV. Early analysis of HPA data shows that occupational associations were less clearly defined, but that the health and social care and the hotel and restaurant sectors were identifiable as sources of additional information about work related infectious disease.

Conclusions: Even assuming underreporting of cases, data on infectious diseases collected by THOR and the HPA are important in identifying workers at risk of infectious disease, and could be used in planning country-wide intervention programmes.

70 TRENDS IN WORK RELATED DISEASE IN THE UK: 1996–2004

R. McNamee1, M. Carder2, Y. Chen1, R. Agius1, S. Turner1.1Biostatistics Group; 2Centre for Occupational and Environmental Health, University of Manchester, Manchester, UK

Background: Time trends in incidence of work related disease can indicate intervention efficacy or emergence of new problems. Estimation of time trends may be possible from surveillance schemes with well defined reporter bases, even if coverage is incomplete.

Aims: To estimate UK trends in the incidence of work related skin diseases, respiratory diseases, musculoskeletal disorders, and mental disorders during 1996 to 2004, using data from three UK-wide surveillance schemes—EPIDERM (Occupational Skin Surveillance), SWORD (Surveillance of Work-Related and Occupational Respiratory Disease) (1999–2004 only), and OPRA (Occupational Physicians Reporting Activity)—currently run by the University of Manchester THOR (The Health and Occupation Reporting) network.

Methods: Reporters, either clinical specialists or occupational physicians, described new cases of disease which in their opinion were work related, either every month or one month a year. Variation in disease incidence over time was estimated from a multilevel statistical model which controlled for type of reporter, short term seasonal variation, and a “new reporter effect”.
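
As a rough stand-in for that model, the sketch below fits a single-level Poisson regression of monthly case counts on calendar time, adjusting for reporter type, season, and a new-reporter indicator, and converts the log-scale trend into an annual percentage change of the kind quoted in the results. The multilevel structure is omitted and the column names are assumptions.

```python
# Simplified trend model: Poisson regression of monthly counts on calendar
# time in years, with reporter type, month-of-year, and new-reporter terms.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def annual_trend(df):
    fit = smf.glm(
        "cases ~ years + C(reporter_type) + C(month) + new_reporter",
        data=df,                       # 'years' = calendar time, e.g. 1996.25
        family=sm.families.Poisson(),
    ).fit()
    b = fit.params["years"]            # trend on the log scale, per year
    ci = fit.conf_int().loc["years"]
    to_pct = lambda x: 100 * (np.exp(x) - 1)
    # e.g. b = -0.0266 per year corresponds to the reported -2.6% per year
    return to_pct(b), (to_pct(ci[0]), to_pct(ci[1]))
```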

Results: The estimated changes in incidence of work related skin diseases (mainly contact dermatitis) assuming a systematic trend over 1996 to 2004 were: −2.6% per year (95% confidence interval, −3.9 to −1.3) based on 11 071 cases reported by clinical specialists, and −10.3% per year (−13.8 to −6.8) using the 1242 cases from occupational physicians. During 1999 to 2004, there were 6377 cases of respiratory disease reported by clinical specialists and 279 reported by occupational physicians. The annual changes for specific diseases were: asthma, −7.8% (−13.1 to −2.1); mesothelioma, −0.6% (−5.1 to 4.2); non-malignant pleural disease, 3.7% (−0.4 to 8.0); pneumoconiosis, −6.9% (−12.6 to −0.7); and “other” respiratory diseases, 2.2% (−4.6 to 9.5). There was no consistent pattern of change for musculoskeletal diseases, the annual change being 1.5% (−1.7 to 4.8) based on 3758 cases. There was an upward trend in “stress and mental disorders” estimated at 14.6% per year (11.1 to 18.2), based on 2654 cases.

Conclusions: The extent to which these results reflect genuine trends in the burden of work related disease in the UK is discussed in the light of separate studies of reporter behaviour.

71 OCCUPATIONAL EXPOSURE TO PESTICIDES AND SOLVENTS AND RISK OF BRAIN TUMOURS IN ADULTS

S. J. Hepworth1, A. Bolton2, K. Muir3, P. A. Mckinney1, M. J. Van Tongeren2.1Centre for Epidemiology and Biostatistics, University of Leeds, Leeds, UK; 2Centre for Occupational and Environmental Health, University of Manchester, Manchester, UK; 3Division of Epidemiology and Public Health, University of Nottingham, Nottingham, UK

Background: Each year in the UK there are over 4000 new cases of brain cancer but little is known about the aetiology of this disease. The UK Adult Brain Tumour Study was instigated to investigate possible risk factors including occupational exposure to pesticides and solvents.

Methods: This population based case–control study covered central Scotland, the West Midlands, West Yorkshire, and the Trent region. Occupational pesticide and solvent exposure data were collected for 244 glioma cases, 167 meningioma cases, 85 acoustic neuroma cases, and over 1000 controls. Information was obtained face to face using a computer assisted personal interview and included questions about hours of work, percentage of time carrying out different tasks, and frequency of exposure to individual types of chemicals, which were then used to assess intensity of exposure.

Results: Occupational exposure to any pesticide was not found to increase the risk of glioma (OR = 1.0 (95% confidence interval, 0.5 to 1.9)), meningioma (OR = 0.7 (0.2 to 2.2)), or acoustic neuroma (OR = 0.8 (0.2 to 2.7)). Investigations by individual pesticide type (insecticide, herbicide, fungicide, and wood treatment) did not show a significant association with any of the three investigated brain tumour diagnoses. No increased risk was found for occupational use of solvents overall (glioma, OR = 1.0 (0.7 to 1.4); meningioma, OR = 0.9 (0.6 to 1.3); acoustic neuroma, OR = 0.6 (0.4 to 1.1)), nor when high, medium, and low exposures were considered for several solvent categories (aliphatic hydrocarbons, aromatic hydrocarbons, chlorinated hydrocarbons, alcohols, and ketones).

Conclusions: This large population based case–control study found no association between occupational exposure to pesticides or solvents and the risk of brain tumours in adults.

72 THE HEALTH OF UK MILITARY PERSONNEL WHO DEPLOYED TO THE 2003 IRAQ WAR

M. Hotopf1, L. Hull1, T. Browne1, N. Fear2, O. Horn1, A. Iversen1, M. Jones1, D. Murphy1, D. Bland1, M. Earnshaw2, N. Greenberg1, J. Hacker-Hughes2, R. Tate3, C. Dandeker4, R. Rona5, S. Wessely1.1King’s Centre for Military Health Research; 2Academic Centre for Defence Mental Health; 3Department of Biostatistics and Computing, the Institute of Psychiatry; 4Department of War Studies, King’s College London; 5Division of Asthma, Allergy and Lung Biology, King’s College London, UK

Background: Research indicates that veterans returning from military deployments are at risk of both mental and physical illness. Higher rates of multiple physical symptoms and psychiatric disorders such as depression, anxiety, and post-traumatic stress disorder (PTSD) have been reported in many epidemiological studies following deployment. However, systematic epidemiological research has previously been undertaken only many years after deployments have concluded. This has led to a range of methodological difficulties. High quality information on health outcomes following deployment is necessary in order to plan health services for veterans. This study aimed to examine the psychological health of British armed forces personnel who deployed to the 2003 Iraq War at the earliest possible opportunity.

Design: We carried out a retrospective cohort study of a random sample of 7695 UK Armed Forces personnel who were deployed to the 2003 Iraq War (TELIC group). These were compared with 10 003 personnel who were in the UK Armed Forces at the same time but were not deployed (Era group). The sample included Regular and Reserve personnel. Participants were sent a comprehensive questionnaire which included items relating to service information and deployment experiences. In addition, measures of psychological morbidity (General Health Questionnaire, GHQ-12), fatigue symptoms (Chalder Fatigue Scale), symptoms of post-traumatic stress (Post-traumatic Checklist, Civilian version, PCL-C), and alcohol related illness (AUDIT) were assessed. A 3/4 cut off was used to define caseness on the GHQ and fatigue measures. PTSD caseness was defined as scoring 50 or above on the PCL-C. A checklist of 53 symptoms was included, and participants were considered multisymptomatic if they endorsed 18 or more. Participants were also asked to rate their own perception of their current health (from excellent to poor).
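
Applied to a table of responses, these caseness rules reduce to simple threshold flags, as in the sketch below; the column names are placeholders.

```python
# Sketch of the caseness definitions described above; column names are
# placeholders, not the study's variable names.
import pandas as pd

def flag_cases(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out["ghq_case"] = df["ghq12"] > 3          # 3/4 cut off on the GHQ-12
    out["fatigue_case"] = df["fatigue"] > 3    # 3/4 cut off on the fatigue scale
    out["ptsd_case"] = df["pcl_c"] >= 50       # PCL-C score of 50 or above
    symptom_cols = [c for c in df.columns if c.startswith("symptom_")]
    out["multisymptomatic"] = df[symptom_cols].sum(axis=1) >= 18  # 18+ of 53
    return out
```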

Results: 10 272 questionnaires (58.7%) were returned completed. There was a modest increase in multiple physical symptoms in the TELIC group (odds ratio (OR) = 1.33 (95% confidence interval, 1.15 to 1.54)). No other differences between groups were found. The effect of deployment was different for regulars and reservists. For regulars, only multiple symptoms were weakly associated with deployment (OR = 1.32 (1.14 to 1.53)). In contrast, for reservists, deployment was associated with psychological symptoms (GHQ) (OR = 2.47 (1.35 to 4.52)) and fatigue (OR = 1.78 (1.09 to 2.91)). Combat duties were associated with increased rates of PTSD symptoms (OR = 1.49 (1.05 to 2.13)) and alcohol consumption (OR = 1.19 (1.01 to 1.41)).

Conclusions: There was no evidence that later deployments to Iraq were associated with poorer health outcomes.

73 THE IMPACT OF DEPLOYMENT ON WOMEN’S PSYCHOLOGICAL HEALTH IN THE BRITISH ARMED FORCES

R. J. Rona1,2, N. Fear3, L. Hull1, S. Wessely1.1King’s Centre for Military Health Research; 2Division of Asthma, Allergy and Lung Biology, King’s College London; 3Academic Centre for Defence Mental Health, King’s College London, UK

Objectives: To assess the differences in psychological health between the sexes, the impact of deployment on women’s psychological health, and the trends in psychological health over time in women in the military.

Design: Representative samples of the British Armed Forces in two cross sectional studies in 1997 and 2004. We selected for the analysis all the women in the study and a random sample of 20% of men participating in the study stratified for rank and by deployment to the Persian Gulf War or the Iraq War.

Main outcome measures: General Health Questionnaire-12 (GHQ-12), 50 common physical symptoms, the Chronic Fatigue Scale, self perception of health (SPH) from the Short Form-36, post-traumatic stress reaction (PTSR), and weekly units of alcohol intake. The following scores were positive in this analysis: >3 for the GHQ-12 and the Chronic Fatigue Scale, at least 10 physical symptoms, fair or poor SPH, and ⩾21 units of alcohol intake a week.

Results: Altogether, 3105 men and 1591 women were included in the analysis. Women were more likely to score positive on the GHQ-12 and the Chronic Fatigue Scale in both surveys, and on physical symptoms in the 2004 survey, but alcohol intake was much higher in men. Women deployed to the Persian Gulf War had more PTSR (odds ratio (OR) = 3.89 (95% confidence interval, 2.06 to 7.35)), GHQ-12 (OR = 2.35 (1.74 to 3.18)), chronic fatigue (OR = 3.51 (2.57 to 4.78)), and physical symptoms (OR = 3.80 (2.77 to 5.22)) in comparison with non-deployed women. Such differences were not observed in the 2004 survey related to the Iraq War.

Conclusions: Psychological symptoms were more common in women than men, as in civilian populations. Deployment was related to women’s psychological health in the Gulf War but not the Iraq War. There are many possible explanations for these differences between the two deployments, such as type of exposure, changes in military preparation for deployment, unit morale, and timing of the survey.

Parallel session D

Lifecourse II

74 SODIUM INTAKE IN INFANCY AND BLOOD PRESSURE AT AGE 7

M. Brion1, A. R. Ness1, G. Davey Smith1, P. Emmett1, I. Rogers1, P. Whincup2, D. A. Lawlor1.1Department of Social Medicine, University of Bristol, Bristol, UK; 2St George’s University of London, London, UK

Background: Infancy may be a sensitive period with respect to the effect of dietary sodium intake on future blood pressure. One follow up of a randomised controlled trial found that blood pressure in adolescence was lower in those who had been randomised to low sodium formulas in infancy compared to those given normal formulas. Only one third of the original trial participants were available for follow up in that study.

Objective: To assess the association between sodium intake in infancy and blood pressure.

Design: Prospective cohort.

Participants: 569 children from the “Children in Focus” subgroup of the Avon Longitudinal Study of Parents and Children (ALSPAC) with sodium intake measures at four months, and 740 with sodium intake measures at eight months.

Outcome measure: Blood pressure measured at age 7.

Results: Geometric mean (95% confidence interval) of sodium intake at four months was 165 (161 to 169) mg/day and at eight months, 532 (513 to 551) mg/day. Sodium intake was greater in girls than in boys and was strongly associated with total energy intake (Pearson’s correlation coefficient = 0.71, p<0.001 at four months) and dietary potassium intake (0.81, p<0.001 at four months) at both ages. After minimal adjustment (age and sex), sodium intake at four months was positively associated with systolic blood pressure (SBP) at seven years (β = 0.85 mm Hg/SD unit (95% CI, 0.06 to 1.64); p = 0.03). This changed little with adjustment for socioeconomic position, maternal age at childbirth, parity, birth weight, gestation, and maternal smoking. However, additional adjustment for body mass index (BMI) at age 7 attenuated the association (β = 0.49 mm Hg/SD unit (−0.29 to 1.26); p = 0.2). When we tried to adjust for total energy or potassium intake in infancy there was strong evidence of collinearity, which made the coefficients uninterpretable. Sodium intake at four months was not associated with diastolic blood pressure (DBP) and intake at eight months was not associated with either SBP or DBP.
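
Collinearity of this kind is commonly quantified with variance inflation factors (VIFs), as in the sketch below. The column names are illustrative, and the threshold mentioned in the comment is a general rule of thumb rather than anything stated in the abstract.

```python
# Sketch: variance inflation factors for the infancy diet variables, to
# quantify the collinearity noted above. Column names are illustrative.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame, cols=("sodium", "potassium", "energy")) -> pd.Series:
    X = sm.add_constant(df[list(cols)].dropna())
    # A VIF much above ~10 is a common rule of thumb for problematic collinearity.
    return pd.Series(
        {c: variance_inflation_factor(X.values, i)
         for i, c in enumerate(X.columns) if c != "const"}
    )
```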

Conclusion: There is a specific association between sodium intake in early infancy (four months) and SBP at age seven years. However, the effects of sodium could not be separated from potential effects of potassium and energy intake, and the attenuation of the effect with adjustment for later BMI may reflect an effect of energy intake throughout life. Further randomised controlled trials of sodium intake in infancy with high levels of long term follow up would be required to determine the effect of infancy sodium intake on later blood pressure.

75 ENVIRONMENTAL TOBACCO SMOKE AND BREAST CANCER IN THE MILLION WOMEN STUDY

K. Pirie, A. Roddam, G. Reeves, V. Beral.Cancer Research UK Epidemiology Unit, University of Oxford, Oxford, UK

Objectives: To investigate the association between childhood and adult household exposure to environmental tobacco smoke (ETS) and the risk of developing breast cancer in the Million Women Study.

Design: Population based multicentre prospective cohort study.

Setting: United Kingdom.

Participants: A cohort of 294 469 mainly postmenopausal women, of whom 2551 were diagnosed with an incident breast cancer.

Main outcome measures: Incident cases of breast cancer.

Results: The majority of women included in the study are lifelong non-smokers (56%), with 16% being active smokers at the time of recruitment; 65% of women reported at least one parent smoking at birth, and 73% at age 10. Parental smoking status was unknown for 16% and 7% of women at birth and at age 10, respectively. At study entry, only 14% of women were living with a partner who smoked. There was no association between active smoking status and the risk of developing breast cancer (hazard ratio (HR) = 1.02 for ever v never smokers (95% confidence interval (CI), 0.93 to 1.13)). No associations were found between childhood exposure to ETS (HR = 0.93 (0.82 to 1.06) for either v neither parent smoking at birth or at age 10) or adult household exposure to ETS (HR = 0.95 (0.83 to 1.10) for living with a partner who smokes v not) and the risk of developing breast cancer. Restricting the analysis to lifelong non-smokers and to women who did not drink alcohol did not materially affect the results.

Conclusions: Exposure to ETS either as a child or as an adult is not associated with risk of breast cancer in this large cohort of women in the UK.

76 PARENT-CHILD RELATIONSHIPS AND THEIR IMPACT ON PHYSICAL HEALTH AMONG THE CHILDREN OF THE ALSPAC COHORT

A. Waylen, S. Stewart-Brown.Warwick Medical School, University of Warwick, Warwick, UK

Background: There is good evidence that parent–child relationships play a part in determining mental health in childhood, and evidence is emerging that suggests that these factors may also have an impact on physical health. This study explored links between early measures of parent–child relationships and physical health at seven years of age using prospective longitudinal data from the ALSPAC birth cohort.

Methods: Exposures consisted of self report measures of parental practices and feelings toward the child with respect to hostility, perceptions of intrusion, and hitting and shouting, collected at data sweeps from pregnancy to four years. Outcomes were maternal report of the child’s health in general and the number of conditions experienced over the last year. Financial difficulties, maternal age, sex of the child, and housing tenure were considered as confounders, whereas maternal mental and physical health, partner relationship, and child behaviour were taken as mediators. Analyses were conducted using Intercooled Stata 9.0: ordinal logistic (ologit) regression for the ordinal outcome (health in general) and Poisson regression for the number of conditions, taking account first of sociodemographic confounders and subsequently of potential mediators.
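
A rough Python equivalent of the ordinal regression step is sketched below using the statsmodels OrderedModel class; the variable names are placeholders for the exposures and confounders described above.

```python
# Sketch of an ordered logistic model for the general-health rating, a rough
# analogue of Stata's ologit. Variable names are placeholders.
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_ologit(df):
    model = OrderedModel(
        df["health_general"],                      # ordinal outcome
        df[["hostility", "intrusion", "financial_difficulty", "maternal_age"]],
        distr="logit",
    )
    return model.fit(method="bfgs", disp=False)
```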

Results: These aspects of parent–child relationships were common, affecting 62% of the sample to some degree for hostility, 80% for intrusion, and 83% for hitting and shouting. After adjustment for sociodemographic factors, the odds ratio for poor health among children exposed to some maternal hostility was 1.30 (95% confidence interval, 1.15 to 1.47); among those exposed to high levels it was 1.47 (1.33 to 1.66). The respective odds ratios for children who were perceived by their mothers to intrude upon their space and time were 1.53 (1.33 to 1.76) and 1.89 (1.55 to 2.31). These aspects of relationships also predicted the number of different childhood conditions reported during the previous year. Hitting and shouting predicted the number of conditions, but not health in general. Maternal health, rows with partners, and children’s behaviour were significant mediators of these relationships.

Conclusion: The results of analyses based on self report measures of parenting and maternal assessment of child health are less robust than those based on objective measures and may be subject to bias. The total variance in child health explained by these aspects of parenting was small. However, small increases in risk can be important for population health if, as is the case with these risk factors, exposure is common. Our results add weight to the emerging body of research suggesting that evidence based parenting interventions, provided on a universal basis, could have a useful impact on children’s physical as well as mental health.

77 USE OF OFFSPRING BODY MASS INDEX AS AN INSTRUMENTAL VARIABLE SUGGESTS THAT THE CAUSAL EFFECT OF INCREASED BMI ON MORTALITY IS UNDERESTIMATED IN CONVENTIONAL OBSERVATIONAL STUDIES

G. Davey Smith1, J. A. C. Sterne1, P. Tynelius2, D. A. Lawlor1, F. Rasmussen2.1Department of Social Medicine, University of Bristol, UK; 2Department of Public Health Sciences, Karolinska Institute, Stockholm, Sweden

Background: Epidemiological studies often demonstrate U shaped or inverse associations between body mass index (BMI) and mortality, which has led some commentators to claim that increasing BMI levels in the population are not a serious public health issue. Reverse causation—illness leading to lower BMI, which in turn leads to lower BMI being associated with elevated mortality—is a key problem in observational studies. We used offspring BMI as an instrumental variable to estimate causal associations of BMI and mortality purged of the reverse causation problem. While illness after the birth of an offspring will lead to lower BMI in an individual, it will not influence BMI in their offspring. Offspring BMI will not have a direct causal effect on parental mortality, and any association with parental mortality will operate only through its relation to the parents’ own BMI. Thus it provides a valid instrument for assessing the causal association between BMI and mortality.
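
This logic reduces to two-stage least squares: regress the parent's BMI on the offspring's BMI, then regress the outcome on the fitted values. The sketch below uses a linear outcome for clarity, although the study modelled cause specific mortality with hazard models; the variable names are assumptions.

```python
# Manual two-stage least squares sketch of the instrumental variable logic:
# offspring BMI instruments the parent's own BMI. Illustrative only.
import statsmodels.api as sm

def two_stage_ls(df):
    # Stage 1: regress the parent's (reverse-causation-affected) BMI on the
    # offspring's BMI, which illness in the parent cannot influence.
    z = sm.add_constant(df[["offspring_bmi"]])
    stage1 = sm.OLS(df["parent_bmi"], z).fit()
    # Stage 2: regress the outcome on the fitted (instrumented) parental BMI.
    xhat = sm.add_constant(stage1.fittedvalues.rename("parent_bmi_hat"))
    stage2 = sm.OLS(df["outcome"], xhat).fit()
    # Note: stage-2 standard errors need the usual 2SLS correction, which a
    # dedicated instrumental variables routine would apply automatically.
    return stage2.params["parent_bmi_hat"]
```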

Methods: BMI measured in an offspring at military conscription was available for over one million mothers and fathers in Sweden, and over the follow up period there were 242 126 paternal and 122 677 maternal deaths. BMI in both fathers and offspring was available for 72 815 father–offspring pairs, with 2030 paternal deaths. We related offspring BMI to cause specific mortality for the parents in the entire sample and used the father–offspring pairs with BMI on both in a full instrumental variables analysis.

Results: Offspring BMI was strongly positively related to mortality from cardiovascular disease (CVD), diabetes, and kidney cancer for mothers and fathers. Adjustment for socioeconomic indicators had little influence on the findings. In the father–offspring pairs a standard deviation higher paternal BMI was associated with a hazard ratio of 1.47 (95% confidence interval, 1.32 to 1.64) for CVD mortality; an instrumental variable analysis (using offspring BMI) estimated a causal effect of 1.82 (1.17 to 2.83) for the same difference in paternal BMI. For fathers there was little association of offspring BMI with mortality from accidents and violence or lung cancer in adjusted analyses, suggesting that confounding by socioeconomic and behavioural risk factors was not strongly influential.

Conclusions: Associations between BMI and mortality are underestimated in naive analyses of observational studies, probably because of reverse causation. Instrumental variable analysis suggests considerably larger causal effects of higher BMI on mortality risk. Public health implications of the trend to higher BMI in the population would be underestimated if naive analyses are accepted as reflecting causal effects.

Mental health

78 BENCHMARKING MENTAL HEALTH SERVICES IN LONDON

P. De Ponte1, G. Hughes2, B. Jacobson1.1London Health Observatory, London; 2London Development Centre, London, UK

Background and objectives: The benchmarking exercise is part of an ongoing effort to gain a better understanding of the variation in the provision of mental health services in London. The specific objectives were: to explore the relation between activity measured by where services are provided and activity measured by where service users are resident, and to assess the impact of this distinction on the factor analysis model; to develop a method for comparing and benchmarking boroughs and trusts; and to make recommendations for further development.

Methods: The project used a factor analysis model developed by McCrone et al (in press) to describe the sociodemography of areas through a small number of factors. These factors were used in a regression analysis to explain variation in activity between boroughs. Data were collected on a range of inpatient and community mental health service variables from London’s 10 mental health provider trusts for working age adults for the period April 2004 to March 2005.

Results: As in a previous benchmarking exercise (McCrone and Jacobson, 2004), the factor analysis model explained more of the variation between boroughs than other commonly used indices for mental health need. However, for some variables only a small proportion of the variation was explained by the factor analysis model. It was hypothesised that the variation not explained by the model was due not to sociodemographic factors or demand factors but to supply side or service provision factors such as clinical practice, service design, access to services, and use of private facilities.

Conclusions: For some aspects of service activity, this model allowed “outliers” to be identified, where activity differed most from what might be expected given the borough’s socio-economic profile, and allowed boroughs within London to be compared whilst taking account of the influence of these factors. This aspect of the methodology could be useful for reviewing use of resources and variations in performance across London. The paper makes recommendations for mainstreaming the benchmarking exercise, outlining the potential of complementing routinely collected data with detailed data collected directly from services to look at full service models.

79 MEASURING THE MULTIDIMENSIONAL NEIGHBOURHOOD USING UK BENEFITS DATA: A MULTILEVEL ANALYSIS OF MENTAL HEALTH STATUS

D. L. Fone1, K. Lloyd2, F. Dunstan1.1Department of Epidemiology, Statistics and Public Health, Centre for Health Sciences Research, Cardiff University, Cardiff, UK; 2School of Medicine, Swansea University, Swansea, UK

Background and objective: The evidence from multilevel research investigating whether the places where people live have an influence on their mental health remains inconclusive. The objectives of this study were first to derive small area level, or contextual, measures of the local social environment, using benefits data from the Department of Work and Pensions (DWP), and second to investigate (1) whether the mental health status of individuals is associated with contextual measures of low income, unemployment, and disability, after adjusting for personal risk factors for poor mental health; and (2) whether the associations between mental health and the contextual measures vary significantly between different population subgroups.

Design: Multilevel analysis of cross sectional data from the Welsh Health Survey 1998. The means tested benefits data available were income support and income-based job seekers allowance. The non-means-tested benefits were incapacity benefit and severe disablement allowance (combined for analysis), and disability living allowance and attendance allowance (combined for analysis). Indirectly age standardised electoral division ratios were calculated as the contextual measures.
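
Each contextual measure is therefore an observed/expected ratio for an electoral division. A minimal sketch is shown below, assuming series indexed by age band and national (all-Wales) reference rates.

```python
# Sketch of an indirectly age standardised ratio: observed claimants in an
# electoral division divided by the number expected from national age
# specific rates. Data structures are assumptions.
import pandas as pd

def indirect_ratio(area_claimants: pd.Series,
                   area_population: pd.Series,
                   reference_rates: pd.Series) -> float:
    """All three series are indexed by age band; reference_rates are the
    national claimants-per-person rates."""
    observed = area_claimants.sum()
    expected = (area_population * reference_rates).sum()
    return observed / expected   # >1 means more claimants than expected
```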

Subjects and setting: 24 975 individuals aged 17 to 74 years living within 833 electoral divisions and 22 unitary authorities in Wales, UK.

Main outcome measure: The Mental Health Inventory (MHI-5) of the Short Form 36 (SF-36) health status questionnaire, modelled as a continuously distributed variable.

Results: Each contextual variable was significantly associated with individual mental health status after adjusting for individual risk factors. The effect sizes for the means tested benefits were: income support, −0.999 (standard error (SE) 0.128); job seekers allowance, −0.863 (SE 0.145). These effect sizes were similar to the Townsend score (−0.931 (SE 0.129)), reflecting their proxy measurement of material and social deprivation. The non-means-tested benefits that were proxy measures of economic inactivity and disability showed substantively stronger effects on individual mental health than the means tested benefits: incapacity benefit and severe disablement allowance, −1.405 (SE 0.146); disability living allowance and attendance allowance, −1.387 (SE 0.151). Cross level interactions showed that all contextual effects were significantly stronger in people who were economically inactive from permanent sickness or disability, with effect sizes for the interactions larger than the main effects.

Conclusions: This study provides evidence for substantive effects of where you live on mental health, and in particular the importance of economic inactivity and disability. DWP benefits data offer a new starting point to hypothesise and investigate possible causal pathways from context to individual mental health status.

80 ASSESSMENT AND AFTERCARE OF DELIBERATE SELF HARM PATIENTS PRESENTING TO A&E DEPARTMENTS

E. Arensman1, P. Corcoran1, I. J. Perry2.1National Suicide Research Foundation, Cork, Republic of Ireland; 2Department of Epidemiology and Public Health, University College Cork, Republic of Ireland

Background: Deliberate self harm (DSH) is a significant risk factor for suicide. There is increasing awareness of the need to standardise the methods of assessment and management of patients presenting to the health system following deliberate self harm.

Objectives: To examine recommended aftercare among DSH patients presenting to accident and emergency (A&E) departments in Ireland; and to examine differences in recommended aftercare among DSH patients with regard to geographical area, sex, method of DSH, and repeater status.

Methods: Through the National Registry of Deliberate Self Harm (National Parasuicide Registry), information was obtained in 2003 on 11 200 hospital presentations following DSH. Using a standard monitoring form, information was recorded on age, sex, method of DSH, and recommended aftercare.

Results: Drug and other poisonings more often led to general admission (55–59% v 21–32%, p<0.01). DSH patients engaging in self cutting were twice as likely to be discharged following treatment in A&E (50% v 23%, p<0.01). Direct psychiatric admission was three times more common following attempted hanging and drowning (33% v 11%, p<0.001). However, 31% were discharged from A&E following attempted hanging, and 25% following attempted drowning. There was marked regional variation in the next care recommended, ranging from 47% being discharged from A&E in the Eastern region v 9% in the South Eastern region (p<0.001).

Conclusions: The findings indicate a lack of standardised assessment and management procedures for DSH patients, which may reflect the availability of resources and variation in quality of care.

81 PSYCHOLOGICAL DISTRESS IN PATIENTS WITH DIABETES

M. M. Collins, I. J. Perry.Department of Epidemiology and Public Health, University College Cork, Republic of Ireland

Objectives: To identify the prevalence and major determinants of anxiety and depression in patients with diabetes.

Methods: We carried out a cross sectional study of 2049 people with types 1 and 2 diabetes, selected from patients experiencing three different models of care in different regions of Ireland: traditional mixed care; GP/hospital shared care; and structured GP care. Psychological distress was assessed with the Hospital Anxiety and Depression Scale (HADS), designed to measure the cognitive symptomatology of depressed mood and anxiety. Factors associated with HADS scores above prespecified thresholds were assessed using binary logistic regression with adjustment for relevant confounders.

Results: The overall response rate was 71% (n = 1456). Approximately 28% of the patients reported HADS scores consistent with mild to severe levels of anxiety and 20% reported scores consistent with mild to severe levels of depression. Diabetes complications and smoking were significantly associated with both higher anxiety and higher depression scores in multivariate analyses. Female sex and being an ex-drinker were associated with higher anxiety scores only, while heavy drinking was associated with higher depression scores.

Conclusions: The prevalence of anxiety and depression in patients with diabetes is higher than in general population samples and is increased in patients with complications. The HADS instrument should be considered as a potential screening tool for clinicians in the assessment of psychological distress in patients with diabetes.

Cancer

82 TRENDS IN CANCER INCIDENCE AND MORTALITY IN THE UK 1993–2002

L. Fairley1, C. Smith2, R. Yates2, D. Forman1,3, on behalf of the UK Association of Cancer Registries.1Northern and Yorkshire Cancer Registry and Information Service, Arthington House, Cookridge, Leeds, UK; 2Information Department, Cancer Research UK; 3Centre for Epidemiology and Biostatistics, University of Leeds, Leeds, UK

Background: The availability of comprehensive cancer registration data for the entire UK since 1993 makes possible the examination of comparable 10 year cancer incidence and mortality trends across the country. Restriction to the age group 35 to 69 years provides a focus on the trends in adult cancers of most relevance to future decades.

Aim: To examine trends in cancer incidence and mortality for men and women aged 35 to 69 years in the UK for all cancers and the four most common sites between 1993 and 2002.

Methods: UK incidence and mortality data for men and women were obtained from Cancer Research UK for the most common cancer sites for each sex for persons aged between 35 and 69 years. Age standardised rates were calculated for men and women separately for each cancer site, for both incidence and mortality.
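
Such rates are obtained by direct standardisation: age specific rates weighted by a standard population. The sketch below assumes series indexed by age band; the abstract does not state which standard population was used, so the weights are an explicit input.

```python
# Minimal sketch of direct age standardisation; the choice of standard
# population weights is an input because the abstract does not specify it.
import pandas as pd

def direct_standardised_rate(cases: pd.Series,
                             population: pd.Series,
                             standard_weights: pd.Series) -> float:
    """Series indexed by age band; returns a rate per 100 000."""
    age_specific = cases / population                    # rate in each age band
    weights = standard_weights / standard_weights.sum()  # normalise the standard
    return float((age_specific * weights).sum() * 100_000)
```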

Results: In 1993 there were 54 732 incident cases of cancer in men and 60 010 in women; corresponding cancer deaths were 32 804 and 28 490, respectively. The number of incident cases rose to 59 488 in men and 65 304 in women by 2002, while the number of cancer deaths decreased to 28 770 and 24 926, respectively. Age standardised incidence rates increased by 2% in men (from 462.0 to 471.0 per 100 000) and 3% in women (493.1 to 507.3) over this time, while mortality rates fell by 15% in men (267.6 to 227.6) and 14% in women (222.4 to 191.9). There were large decreases in both incidence (from 97.0 to 75.0) and mortality (83.1 to 61.9) from lung cancer for men, while only small decreases in female rates were apparent. For colorectal cancer there was little change in the incidence rate for both sexes over time, but the mortality decreased from 30.5 to 24.7 for men and from 19.8 to 14.5 for women. Prostate cancer incidence rates increased by 79% from 48.7 to 87.1, while mortality decreased slightly from 13.5 to 12.3. Rates for female breast cancer incidence increased from 190.1 to 208.3, while mortality fell from 56.8 to 45.0.

Conclusion: UK mortality rates for the four major cancers and all cancers combined have declined over the 10 year period to 2002 in men and women. This is in contrast to a varying pattern of incidence showing substantive increases (prostate and breast cancers), decreases (male lung cancer), and relative stability (colorectal and female lung cancers). These results reflect interactions involving smoking behaviour, improved treatment, screening, and earlier detection.

83 FACTORS PREDICTING RECEIPT OF TREATMENT AND MORTALITY FOR PATIENTS WITH CANCER OF THE OESOPHAGUS AND GASTRIC CARDIA: A POPULATION BASED STUDY

D. P. Cronin, L. Sharp, A. E. Carsin, H. Comber.National Cancer Registry of Ireland, Elm Court, Boreenmanna Road, Cork, Republic of Ireland

Objectives: The prognosis for patients with cancer of the oesophagus or gastric cardia is poor. Clinical trial evidence suggests treatment can improve survival. However, the extent of use of different forms of treatment in community based clinical practice is not clear, nor are the factors which predict whether or not a patient receives treatment. We investigated treatments received, associated factors, and effects on mortality.

Design: A population based study of incident cancers. Treatment within one year of diagnosis was recorded. Patients were followed from diagnosis to death or to 31 December 2002, whichever was sooner. Logistic regression was used to explore factors associated with treatment receipt. Cox proportional hazards methods were used to determine factors associated with mortality, separately for those having and not having surgery.

Setting: Republic of Ireland.

Subjects: 3231 patients aged ⩾20, diagnosed with primary cancer of the oesophagus or gastric cardia (ICD-O-2 C15.0–C15.9, C16.0) during 1994 to 2001 and registered by the National Cancer Registry; 1038 had squamous cell oesophageal cancer, 911 oesophageal adenocarcinoma, 738 gastric cardia adenocarcinoma, and 544 other/unspecified histologies.

Main outcome measures: Proportions receiving any surgery, any chemotherapy or any radiotherapy; odds ratios (OR); hazard ratios.

Results: 31% of patients had surgery, 32% radiotherapy, and 22% chemotherapy; 43% had no tumour directed treatment. Over time, the proportions receiving chemotherapy or radiotherapy increased significantly, while the proportion having surgery decreased. In multivariate analyses, older and unmarried patients were significantly less likely to receive any of the treatments, although for chemotherapy this was only evident in recent years. Women were less likely than men to have chemotherapy. Compared with patients with distant/unknown stage oesophageal cancer, those with local/regional oesophageal cancer were almost twice as likely to have surgery (OR = 1.85 (95% confidence interval, 1.48 to 2.30)) and those with gastric cardia cancer almost six times as likely (OR = 5.69 (4.08 to 7.94)). Patients with local/regional oesophageal cancer were most likely to receive chemotherapy or radiotherapy. Among patients who did not have surgery, stage and a combination of tumour site/histology and sex were significant predictors of mortality. In addition to these factors, among patients having surgery, period of diagnosis and age also predicted mortality.

Conclusions: In this community based setting, use of cancer directed treatment increased over time. Patient related factors predicted both the likelihood of treatment and mortality. Such disparities need to be addressed if survival and mortality rates for these cancers are to improve.

84 TRENDS IN PROSTATE CANCER AND ITS MANAGEMENT

S. McPhail1, T. Cross1, B. Cottier2, J. Verne1.1South West Public Health Observatory, Bristol, UK; 2National Cancer Services Analysis Team, Clatterbridge Centre for Oncology, UK

Background: In England, prostate cancer is the most commonly diagnosed cancer in men and the second most common cause of cancer death. The incidence of prostate cancer is rising rapidly and shows a striking geographical variation. However, the mortality is declining and has a much smaller geographical variation. It is thought that the rising incidence rate largely reflects the detection of asymptomatic prostate cancer through prostate-specific antigen (PSA) testing, and that this testing happens unevenly across the country.

Objectives: (1) To investigate trends in incidence, mortality, and number of surgical procedures relating to prostate cancer and their geographical variation across England; (2) to test whether this geographical variation is correlated with socioeconomic differences.

Design: Hospital Episode Statistics data were linked with data from English cancer registries and Index of Multiple Deprivation (2004) data from the Office of the Deputy Prime Minister. Trend analysis was carried out, and the degree of correlation between deprivation, incidence/mortality rates, and number of surgical procedures was tested.

Results: Over the 10 year period 1994 to 2003, the increase in prostate cancer incidence rate varied between primary care trusts (PCTs) by a factor of up to 3. There has been little variation in mortality. Rates of biopsy and radical prostatectomy also increased rapidly, with very strong differences across the country and by age band. A significant correlation, by PCT, was found between deprivation and incidence rate of prostate cancer, but not between deprivation and mortality.

Conclusions: The incidence and management of prostate cancer show rapid change and large geographical variation, while the mortality is more stable and uniform. Socioeconomic differences have been shown to play some part in explaining these trends.

85 POPULATION BASED TRENDS IN PSA TESTING AND PROSTATE CANCER PRESENTATION

A. Black, D. Connolly, A. Gavin, L. Murray.Northern Ireland Cancer Registry, Queen’s University Belfast, UK

Background: Prostate cancer (CaP) is the most commonly diagnosed cancer and the second leading cause of cancer death in men. Screening for CaP is not recommended in the UK and yet there has been a marked increase in the use of prostate-specific antigen (PSA) testing. CaP incidence is increasing while mortality rates are declining worldwide, which may in part be attributable to increased PSA testing. In the UK, there is a paucity of information regarding the trends in PSA testing and prostate cancer presentation.

Objective: To investigate population trends in PSA testing and presentation of CaP in a region where screening is not recommended.

Design, setting, and participants: The Northern Ireland Cancer Registry (NICR) maintains an electronic register of all PSA tests done in Northern Ireland, which is linked to the NICR database of incident cancers occurring within the region. Men who had their first PSA test between 1994 and 2003 were included. Regression analyses were used to investigate the changes in PSA testing and cancer presentation.

Results: Between 1994 and 2003, 142 758 men had their first PSA test, of whom 45% had repeat testing. In the same period, 4301 men (3%) were subsequently diagnosed with prostate cancer. Population based rates of PSA testing remained stable, with approximately 12 000 men (6%) aged ⩾50 years having their first PSA test annually. Mean age at initial testing decreased from 67.1 years in 1994 to 59.0 years in 2003 (p<0.0001). Overall, 21% of initial PSA levels were ⩾4.0 ng/ml. The median level of PSA at first test decreased over the period from 2.2 ng/ml to 1.1 ng/ml (p<0.0001). The proportion of tests done in general practice increased from 50% to 75% (p<0.0001). Cancer incidence increased with a concurrent decrease in mean age at diagnosis from 75.6 to 69.2 years (p<0.0001). In men with cancer, the median level of initial PSA fell markedly, by year of diagnosis, from 53.3 ng/ml in 1994 to 5.2 ng/ml in 2004 (p<0.0001).

Conclusions: The shift in PSA testing from hospitals to general practice, younger age at testing and diagnosis, and lower levels of initial PSA suggests that GPs may be screening younger men for CaP. The lower levels of initial PSA in men diagnosed with cancer in recent years may indicate increasing use of prostate biopsy in men with lower PSA levels. Gleason scores are currently being obtained to investigate the clinical significance of cancers diagnosed in this period.

Lifestyle and health behaviour

86 TYPOLOGIES OF ALCOHOL CONSUMPTION IN ADOLESCENCE: PREDICTORS AND ADULT OUTCOMES

N. Cable, A. Sacker.Department of Epidemiology and Public Health, University College London, London, UK

Objective: To use theories of planned behaviour in an examination of the effects of alcohol expectancies, norms, and control beliefs on typologies of adolescent alcohol consumption and estimate their long term effect on alcohol misuse at age 30.

Design: Prospective cohort study.

Participants: Cohort members from the 1970 British Cohort Study, contacted at age 16 and 30.

Main outcome measures: Adolescent alcohol consumption is categorised into non/social occasional, social regular, heavy occasional, and heavy regular. These were identified according to the reported quantity and frequency of alcohol consumed in the week before the interview at age 16. Alcohol misuse in adulthood is identified by the CAGE questionnaire.

Results: Missing information was filled in by multiple imputation using the MICE method, yielding a sample size of 7023 men and 6896 women. Descriptive findings show that the majority of cohort members were non/social occasional drinkers at age 16. Nine per cent of women and 19% of men misused alcohol at age 30. Results from multinomial logistic regression models showed that the patterns of association between expectancies, norms, and control beliefs and adolescent drinking typologies were similar for men and women. Positive alcohol expectancies were associated with all types of adolescent alcohol use in comparison with non/social occasional alcohol users. Norms influenced the frequency of consumption more than the quantity of alcohol consumed, whereas control beliefs were linked to consumption of large quantities of alcohol. All men who used alcohol in adolescence were at risk of misusing alcohol at age 30, whereas the risk among women was limited to those who were heavy drinkers in adolescence.

Conclusions: Classifying drinking behaviour by the quantity and frequency of alcohol consumption clarifies our understanding of the influences on adolescent alcohol use. The findings imply that policies aimed at reducing the number of people who misuse alcohol in adulthood need to be initiated in childhood. Dissociating positive expectations of alcoholic beverages from alcohol use, and involving parents, may be effective means of protecting young people from excessive alcohol consumption. In turn, reducing adolescent alcohol use should protect against alcohol misuse in adulthood, especially among men.

87 CHANGES IN SMOKING PREVALENCE AND CONSUMPTION RATES IN THE REPUBLIC OF IRELAND COMPARED WITH A BAR WORKER STUDY BEFORE AND AFTER THE LEGISLATIVE BAN ON WORKPLACE SMOKING

B. A. Greiner1, B. J. Mullally1, I. J. Perry1, S. Allwright2.1Department of Epidemiology and Public Health, University College Cork; 2Department of Public Health and Primary Care, University of Dublin, Trinity College Dublin, Republic of Ireland

Background: On 29 March 2004, the Republic of Ireland became the first EU country to introduce a nationwide ban on workplace smoking including bars and restaurants. While the focus of this measure has been to protect worker health by reducing exposure to secondhand smoke, other effects such as reduced smoking prevalence and consumption are likely.

Objectives: (1) To determine changes in smoking within the general population before, one year after, and two years after the legislative ban, in comparison with changes among bar workers in the Republic and Northern Ireland during the same period; (2) to compare the perceived influence of the ban on the amount smoked with the actual change in consumption among bar workers.

Methods: Representative weighted data from the general population were taken from the TNS MRBI Phonebus, a continuous telephone survey of a fresh sample of 1000 adults every two weeks that records sociodemographics and smoking habits; these data were used to examine occupation specific variation in smoking behaviour. In addition, 329 bar staff working in the Republic of Ireland and Northern Ireland were surveyed in the six months leading up to the ban; 249 were followed up one year later (76% follow up rate). The overall sample included a random subsample of 129 Cork bar workers, of whom 107 were followed up a year later. The Cork bar workers are currently being followed up a third time (two years after the ban) to study sustained effects. The standardised questionnaires included questions on smoking status and amount, sociodemographics, and respiratory health symptoms. Self reported smoking status was validated against cotinine measures.

Results: Between October 2003 and October 2005, smoking prevalence in the general population fell from 25.37% to 23.91%, a relative decline of 5.75% (1.46 percentage points). Data from recent months indicate a slight increase in prevalence, which will be analysed further as data become available. We also observed a significant fall in the smoking prevalence of bar workers (from 35.9% to 31.4%, p<0.01) and a significant fall in average cigarette consumption among smokers in the Republic (from 21 to 18, p = 0.002), but not in Northern Ireland; 3.3% of smoking bar workers perceived the ban to have been responsible for an increase in the amount they smoked, while 23% perceived a reduction. Results of the ongoing data collection (two years after the ban) will also be presented.
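Note that the 5.75% figure is a relative decline; the quick check below, using the prevalences quoted above, makes the distinction between the relative decline and the absolute fall explicit.

```python
# Relative v absolute decline in prevalence (values from the abstract).
before, after = 25.37, 23.91
print(f"absolute fall: {before - after:.2f} percentage points")     # 1.46
print(f"relative decline: {(before - after) / before * 100:.2f}%")  # 5.75
```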

Conclusions: The fall in prevalence and in cigarette consumption in the Republic suggests that the legislation created an environment in which smoking is discouraged. However, the recent increase in smoking prevalence in the Republic warrants continued monitoring.

88 WAS JOHN REID RIGHT? SMOKING, CLASS, AND PLEASURE

I. Lang1, E. Gardener2, F. A. Huppert3, D. Melzer1.1Epidemiology and Public Health Group, Peninsula Medical School, Exeter, UK; 2Institute of Public Health, University of Cambridge; 3Department of Psychiatry, University of Cambridge, Cambridge, UK

Objective: To assess whether there is a relation between smoking and quality of life, or the pleasure domain of quality of life, and to assess whether this differs by socioeconomic status group.

Design: A population based cross sectional study.

Setting: National population sample in England.

Participants: 9176 individuals aged 50 and over who participated in the Health Survey for England (HSE) and wave 1 of the English Longitudinal Study of Ageing (ELSA) in 2002.

Main outcome measures: Individuals were classified as never smokers, ex-smokers, or current smokers, and household wealth was used as a marker of socioeconomic position. Pleasure and quality of life were assessed using the CASP-19 instrument, a 19 item measure covering four theoretical domains: control, autonomy, self realisation, and pleasure. The main outcome measures were experiencing lower than median levels of pleasure or of quality of life.

Results: Mean levels of pleasure and quality of life were lower for those in lower SES categories. Quality of life was poorer for ex-smokers than for non-smokers, and poorer for smokers than for ex-smokers. We found that the odds ratio of experiencing lower than median levels of pleasure for all smokers compared with non-smokers was 1.33 (95% confidence interval, 1.17 to 1.51), and for smokers with low SES it was 1.42 (1.16 to 1.74). The same pattern of associations was found when other measures of quality of life were used.
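A minimal sketch of the kind of logistic model that would yield such adjusted odds ratios is given below. The variable names are hypothetical and the data simulated; this is not the ELSA analysis code.

```python
# Sketch: logistic regression for below-median pleasure by smoking status,
# adjusting for a wealth-based SES marker and age (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "low_pleasure": rng.integers(0, 2, n),     # 1 = below-median CASP-19 pleasure
    "smoker": rng.integers(0, 2, n),           # 1 = current smoker
    "wealth_quintile": rng.integers(1, 6, n),  # household wealth as SES marker
    "age": rng.integers(50, 90, n),
})

fit = smf.logit("low_pleasure ~ smoker + C(wealth_quintile) + age",
                data=df).fit(disp=False)
or_ci = np.exp(fit.conf_int())
print(f"OR for smokers: {np.exp(fit.params['smoker']):.2f} "
      f"(95% CI {or_ci.loc['smoker', 0]:.2f} to {or_ci.loc['smoker', 1]:.2f})")
```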

Conclusion: We found no evidence to support a claim that smoking is associated with heightened levels of pleasure, either in those with low SES or in the general population. In fact, our results suggest that the opposite is the case: smoking is associated with lower levels of pleasure and poorer overall quality of life. Policy decisions on smoking should consider its potentially harmful impact on quality of life and pleasure as well as on other aspects of health.

89 WOMEN WHO REPORT SEX WITH WOMEN IN BRITAIN: NATIONAL PROBABILITY DATA ON PREVALENCE, SEXUAL BEHAVIOURS, AND HEALTH OUTCOMES

J. V. Bailey1, C. H. Mercer2, A. M. Johnson2, B. Erens3, K. Wellings4, K. A. Fenton2,5, A. J. Copas2.1Department of Primary Care and Population Science, University College London, UK; 2Centre for Sexual Health and HIV Research, University College London, UK; 3National Centre for Social Research, London, UK; 4London School of Hygiene and Tropical Medicine, UK; 5Centers for Disease Control, Atlanta, USA

Objectives: To estimate the prevalence of same sex sexual experience among women, and to compare women who do and do not report female partner(s) in terms of sociodemographic characteristics, sexual and general health risk behaviours, and health outcomes.

Design, setting, and participants: Probability survey of Britain’s general population aged 16 to 44, conducted in 1999 to 2001 (n = 6399 women) using face to face interviewing and computer assisted self interviewing.

Results: 4.9% of women reported same sex partner(s) ever (WSW) (median age at first genital contact, 22 years); 2.8% reported same sex partner(s) in the past five years (n = 178), of whom 85% also reported male partner(s) in this time frame (n = 151). These women who have sex with women and men (WSWM) were significantly more likely to report adverse risk behaviours than women reporting exclusively male partners in the past five years (WSEM): WSWM reported greater numbers of male sexual partners ever (15 v 4, p<0.0001), more frequent “unsafe sex” (age adjusted odds ratio (OR) = 7.17), and significantly greater consumption of tobacco, alcohol, and intravenous drugs. WSWM were also more likely than WSEM to report induced abortion (age adjusted OR = 3.07) and sexually transmitted infection (STI) (age adjusted OR = 4.41). There were no STIs among women who reported sex exclusively with women (WSEW) in the previous five years (n = 31).

Conclusions: A history of sex with women is a marker of increased risk of adverse sexual, reproductive, and general health outcomes; in particular, WSWM reported riskier heterosexual practice and greater substance misuse than WSEM. Addressing the needs of hidden populations such as WSW can be difficult, but non-judgmental history taking from all female patients, without making assumptions about sexual behaviour, should help practitioners discuss the risks that WSW may face.

Maternal and child health

90 ECTOPIC PREGNANCY: ANALYSIS OF TRENDS AND REPRODUCTIVE OUTCOMES IN A POPULATION FROM 1950 TO 1999

S. Bhattacharya1, A. Shetty2, D. M. Campbell2.1Department of Public Health, University of Aberdeen; 2Department of Obstetrics and Gynaecology, University of Aberdeen, Aberdeen, UK

Background: Ectopic pregnancy continues to be an acute emergency and contributed to 11 maternal deaths in the 2000 to 2002 Confidential Enquiry into Maternal Deaths (CEMD) in the UK. Statistics show a steady rise in ectopic pregnancies over time.

Objective: To analyse trends in the incidence of ectopic gestation in Aberdeen city and district between 1950 and 1999 and compare the reproductive outcomes following an ectopic pregnancy with those following a spontaneous miscarriage.

Methods: Data were extracted from the Aberdeen maternity and neonatal databank on all women who had had an ectopic pregnancy or a miscarriage between 1950 and 2004. Trends in the incidence of ectopic pregnancy were calculated on a yearly basis, using the estimated number of pregnancies as the denominator. A retrospective cohort study design was used to compare reproductive outcomes: the exposed cohort comprised all women whose first pregnancy was ectopic, while the unexposed cohort comprised women whose first pregnancy ended in miscarriage. Reproductive, obstetric, and neonatal outcomes were compared between the groups using univariate and multivariate statistical methods. A nested case–control study was also conducted to determine the factors influencing conception following an ectopic pregnancy.

Results: Over this period, 1505 ectopic pregnancies were recorded, with a steady increase in incidence from 0.3% in the 1950s to 0.9% in the 1990s. Of these women, 546 had had an ectopic gestation in their first pregnancy. No further pregnancies were reported in 257 women (47.1%) with ectopics or in 2644 women (30.3%) with miscarriages (p<0.05). After adjusting for confounders, women who had had a previous ectopic were 10 times more likely to have another ectopic (odds ratio (OR) = 10.7 (95% confidence interval, 6.7 to 16.9)), whereas a further miscarriage was more likely in women who had had a previous miscarriage (OR = 0.63 (0.46 to 0.84)). Among those who proceeded to a term pregnancy, obstetric and neonatal outcomes were no worse in women with previous ectopics than in those with miscarriages, although emergency caesarean sections were more common in women with previous ectopics (OR = 1.6 (1.2 to 2.2)). The factors determining future conception were maternal age and the operative management of the ectopic pregnancy, with laparoscopic removal giving the best outcomes with regard to future fertility.
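As an illustration of the unadjusted building block behind such estimates, the sketch below computes an odds ratio and 95% confidence interval from a 2×2 table; the counts are invented for illustration, not taken from the Aberdeen databank.

```python
# Unadjusted odds ratio and 95% CI from a 2x2 table (hypothetical counts).
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# rows: first pregnancy ectopic / first pregnancy miscarriage
# cols: repeat ectopic yes / no
table = Table2x2(np.array([[40, 249],
                           [80, 6000]]))
lcb, ucb = table.oddsratio_confint()
print(f"OR = {table.oddsratio:.1f} (95% CI {lcb:.1f} to {ucb:.1f})")
```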

Conclusions: Women with previous ectopic pregnancy have a worse reproductive outcome, but similar obstetric and neonatal outcomes, when compared with women who have had a previous miscarriage.

91 THE PONDER YEARS: POSTNATAL DEPRESSION, ECONOMIC EVALUATION, AND RANDOMISED TRIAL

C. J. Morrell1, R. Warner2, P. Slade3, S. Dixon1, S. J. Walters1, G. Paley4, T. Brugha5, N. Mathers1, E. McGuirk6.1ScHARR, University of Sheffield; 2Sheffield Community Health; 3Psychology Department, University of Sheffield, Sheffield, UK; 4Leeds Mental Health Trust Specialist Psychotherapy Service and Leeds University, Leeds, UK; 5Leicester University, Leicester, UK; 6Broxtowe and Hucknall PCT, UK

Objective: To assess the costs and effectiveness of psychological interventions (cognitive-behavioural approach and person centred approach) by health visitors (HVs) for women with postnatal depression (PND).

Design: A cluster randomised controlled trial with 18 month follow up.

Setting: 100 GP/HV clusters in the former Trent region.

Participants: 4073 women consented to take part; 3437 returned a six week postal questionnaire and 2877 returned a six month postal questionnaire.

Interventions: Up to eight weekly one hour visits by an HV trained in offering either a cognitive-behavioural approach or a person centred approach to supporting women with postnatal depression.

Main outcome measures: The trial examined outcomes among women, their partners, and their infants to 18 months postnatally. The primary outcome, on which the sample size was based, was the difference between the intervention and control groups in the proportion of women scoring 12 or more on the Edinburgh Postnatal Depression Scale (EPDS) at six months postnatally. At 6, 12, and 18 months, the SF-12, Clinical Outcomes in Routine Evaluation, the State-Trait Anxiety Inventory, the Parenting Stress Index, and the Dyadic Adjustment Scale were also used with women and their partners.

Results: Of the 2659 women within 100 clusters who returned both a six week and a six month follow up questionnaire, 271 (15.5%) in the intervention group and 147 (16.1%) in the control group scored 12 or more on the EPDS at six weeks postnatally. The early analysis indicates differences favouring the intervention group, compared with the control group, in the proportion of these women scoring 12 or more on the six month EPDS, in their mean six month EPDS scores, in the proportion of all intervention group women scoring 12 or more on the six month EPDS, and in the mean six month EPDS scores of all intervention group women; these differences were all statistically significant. The early economic analysis indicates a difference in the length of visits in the intervention group compared with the control group. This analysis is ongoing.
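An analysis respecting the cluster design described above might use a binomial GEE with exchangeable correlation within GP/HV clusters, sketched below with hypothetical variable names and simulated data; the trial's actual model may differ.

```python
# Sketch: binomial GEE with exchangeable within-cluster correlation,
# reflecting the GP/HV cluster randomisation (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "epds_high": rng.integers(0, 2, n),  # 1 = EPDS score of 12 or more at six months
    "group": rng.integers(0, 2, n),      # 1 = intervention cluster
    "cluster": rng.integers(0, 100, n),  # GP/HV cluster identifier (100 clusters)
})

fit = smf.gee("epds_high ~ group", groups="cluster", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(fit.params["group"]))  # odds ratio, with cluster-robust inference
```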

Conclusions: The combined experimental HV psychological intervention was associated with better postnatal health, as measured by the EPDS scores at six months, than usual HV care.

92 SYSTEMATIC REVIEW OF THE FETAL EFFECTS OF LOW-MODERATE AND BINGE DRINKING IN PREGNANCY

J. Henderson, J. Gibson, R. Gray.National Perinatal Epidemiology Unit, University of Oxford, Oxford, UK

Background: Recently there have been a number of concerns expressed about the effects of low to moderate prenatal alcohol exposure on the developing fetus and child. Some authorities have claimed that abstinence from alcohol is the only safe message in pregnancy. However, it is unclear whether evidence about the potential adverse effects of low to moderate consumption of alcohol in pregnancy would support such a view.

Aim: To systematically review the evidence from human studies on the effects of low to moderate prenatal alcohol consumption (up to 10.4 UK units a week) and binge drinking (more than 6 UK units on a single occasion), compared with no alcohol consumption, on the developing embryo, fetus, and child.

Methods: A computerised search strategy was run in Medline, Embase, Cinahl, and PsycINFO for the years 1970 to 2005. Outcomes considered were miscarriage, stillbirth, antepartum haemorrhage, intrauterine growth restriction, prematurity, birth weight, small for gestational age at birth, postnatal growth, birth defects including fetal alcohol syndrome, and neurodevelopmental outcomes. Titles and abstracts were read by two researchers, with inclusion or exclusion decided according to prespecified criteria. All provisionally included papers were then obtained and read in full by two researchers to confirm inclusion. The papers were assessed for quality using the Newcastle-Ottawa quality assessment scales and data were extracted.

Results: The search yielded 3630 titles and abstracts, which were narrowed down to 66 relevant papers. At low to moderate levels of consumption there were no consistently significant effects of alcohol on any of the outcomes considered. However, binge drinking did appear to have an adverse effect on long term neurodevelopmental outcomes. Many of the reported studies suffered from significant limitations, making interpretation problematic.

Conclusions: This systematic review found no convincing evidence of adverse effects of prenatal alcohol exposure at low to moderate levels of exposure. However, binge drinking was associated with consistently poorer neurodevelopmental outcomes.

93 ETHNIC DIFFERENCES IN BREAST FEEDING INITIATION AND CONTINUATION IN THE UK: FINDINGS FROM THE MILLENNIUM COHORT STUDY

Y. J. Kelly, R. G. Watt, J. Y. Nazroo.Department of Epidemiology and Public Health, University College London, London, UK

Background: It has previously been reported that among different ethnic minority groups, patterns of breast feeding vary considerably. However, little is known about the factors that might explain differences across and within different ethnic groups.

Objectives: To examine patterns of breast feeding initiation and continuation among an ethnically diverse sample of new mothers, and to assess the effects of demographic, psychosocial, economic, and cultural factors on ethnic differences in breast feeding practices.

Design: First sweep of the Millennium Cohort Study.

Setting: All four countries of the United Kingdom.

Participants: Complete data were available for 17 446 nine month old infants and their mothers (96% of total sample).

Main outcome measures: Breast feeding initiation and continuation at three months.

Results: Multivariate models were run and, after adjustment for demographic, economic, and psychosocial factors, the breast feeding advantage of ethnic minority groups intensified. The odds of breast feeding initiation compared with white mothers were: Indian, odds ratio (OR) = 2.6 (95% confidence interval, 1.7 to 4.0); Pakistani, OR = 3.2 (2.6 to 4.0); Bangladeshi, OR = 7.9 (5.3 to 11.6); black Caribbean, OR = 9.3 (4.9 to 17.4); and black African, OR = 13.6 (7.6 to 23.7). Further adjustment for a marker of cultural tradition attenuated these relationships but all remained statistically significant. After adjustment for demographic, economic, and psychosocial factors, the odds of any breast feeding at three months after birth compared with white mothers were: black African, OR = 5.3 (3.3 to 8.7); black Caribbean, OR = 3.9 (2.7 to 5.6); Indian, OR = 2.1 (1.6 to 2.7); Bangladeshi, OR = 1.6 (1.2 to 2.1); and Pakistani, OR = 1.1 (0.9 to 1.5). Additional adjustment for a marker of cultural tradition attenuated the relation for Indian, Bangladeshi, black African, and Pakistani mothers. Models run for any breast feeding continuation at four and six months were consistent with these results.

Conclusions: These economic, psychosocial, demographic, and “cultural” influences have important policy implications. There is concern that integration into the dominant culture for more “traditional” mothers within ethnic minority groups may reduce breast feeding rates. This, together with the different socioeconomic and cultural profiles of ethnic groups, needs to be taken into account by policy developers aiming to increase breast feeding rates.

Plenary session II

94 CHANGING POOR RATES OF ATTENDANCE IN PSYCHIATRIC OUTPATIENT CLINICS: THE LEEDS PROMPTS TRIAL

J. Kitcheman1, I. Kader2, M. Dinesh2, A. Pervaiz3, G. Brookes4, C. E. Adams1.1Department of Psychiatry, University of Leeds; 2St James’ University Hospital, Leeds; 3St Luke’s Hospital, Huddersfield; 4Seacroft Hospital, Leeds, UK

Background: The non-attendance rate at psychiatric outpatient departments is high and considered wasteful of time and resources by clinicians, managers, and policymakers. A Cochrane systematic review has suggested that text based prompts may be effective in reducing the non-attendance rate (relative risk (RR) of non-attendance = 0.6 (95% confidence interval, 0.4 to 0.9)), but these findings were based on two small trials from the USA (total n = 200). The present study attempted to replicate this finding within the NHS setting. No research of this kind has been undertaken in the UK before.

Methods: The study was a pragmatic block randomised trial with full concealment of allocation. Experimental intervention: a letter reminding first time attendees of their psychiatric outpatient appointment, giving (1) the time of the appointment; (2) the nature of the appointment and the outpatient department; (3) a map with details of bus routes. The letter arrived 24 to 48 hours before the index appointment. Control intervention: standard care, including the local partial booking service. Primary outcome: attendance at the first appointment, with follow up for 12 months for secondary service outcomes.

Results: We randomised 764 people (mean age 36 years, 48% women) from across the psychiatric service (36% addiction clinic, 20% liaison, 19% psychosexual medicine, 17% general adult, 7% chronic fatigue service, 1% women’s service, 1% self harm) and attained good levels of follow up (99% for the primary outcome, 99% for secondary outcomes). Gentle written prompts decreased initial psychiatric outpatient non-attendance within the NHS (n = 761, absolute risk reduction (ARR) 7%, RR = 0.76 (0.59 to 0.98), number needed to treat (NNT) = 16 (10 to 187)). Viewed alongside all other relevant evidence, these findings are both consistent and compelling (n = 1184, 5 RCTs, RR of non-attendance = 0.72 (0.58 to 0.89), I2 = 12%, NNT = 11 (8 to 26)). In this trial, hospital admission rates at the end of one year were lower for people who had been prompted to attend, although not to conventional levels of statistical significance (n = 748, RR = 0.16 (0.02 to 1.34)).
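For readers unfamiliar with these summary measures, their relationships are as follows, with CER and EER the control and experimental event rates:

```latex
\[
\text{RR} = \frac{\text{EER}}{\text{CER}}, \qquad
\text{ARR} = \text{CER} - \text{EER}, \qquad
\text{NNT} = \frac{1}{\text{ARR}}
\]
```

The reported NNT of 16 implies an unrounded ARR of about 1/16 = 6.25%, consistent with the quoted ARR of 7% after rounding.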

Conclusions: This large pragmatic randomised trial, supported by local NHS Priorities and Needs funding (2 years, 50% R1B), illustrates how informative trials can be undertaken even within a busy working environment. Leeds PROMPTS shows that, by use of a simple administrative technique, non-attendance can be considerably reduced, and that there may even be longer term benefits. Psychiatric outpatient receptionists frequently double book patients to ensure appointments are filled; employing successful techniques such as prompting to decrease non-attendance still further may have important manpower implications.

95 ITALY: THE LARGEST EUROPEAN COUNTRY TO BAN SMOKING IN ALL ENCLOSED PLACES: A FIRST YEAR EVALUATION

D. Galeone1, S. Vasselli1, L. Spizzichino1, P. D’argenio1, G. Laurendi1, N. Binkin2, D. Greco1.1Italian Ministry of Health, Prevention Department; 2National Health Institute, Rome, Italy

Background: On 10 January 2005, a smoking ban in all enclosed places came into force in Italy. Support for and evaluation of this law include a telephone hotline, surveys of bar and restaurant owners, a national behavioural risk factor survey (BRFS), inspections, and market research on sales of tobacco and smoking cessation products.

Results: Hotline: During the first month, 4032 calls were received; <3% were complaints about the law. Bar and restaurant survey: In the 1600 venues visited, observed smoking declined from 31% before implementation to 0.3% afterwards. Most proprietors (74%) reported that customers were satisfied with the law; only 12% reported a significant decline in revenues, and 80% of owners who smoked had reduced their cigarette use or tried to quit. Compliance with the smoke-free legislation: Market research on public opinion shows that the law is almost universally recognised as a positive public health measure. A survey conducted by the National Institute of Health in March–April 2005, among a nationally representative sample of 3114 people aged 15 years and older, shows high levels of public support: 90.4% of people (smokers and non-smokers) are in favour of the law, 87.3% feel the law is respected in public places, and 69.1% feel it is respected in workplaces. BRFS: Among smokers, 40% have decreased the number of cigarettes smoked since the law came into effect. Inspections: Fines were issued in 4.9% of the inspections conducted from January to March 2005, mostly for lack of compliance with required signage rather than for smoking where forbidden. Marketing data: Between January and November 2005, sales of cigarettes decreased by nearly 5.7% in comparison with the same period in 2004, and during the first nine months after implementation a substantial increase in sales of cessation products (+90%) was observed.

Conclusions: Italy is the largest European country to have banned smoking in all enclosed places. The law has been clearly understood, generally respected, and widely supported by the principal stakeholders. Furthermore, it appears to have resulted in a decrease in tobacco consumption.

96 A RANDOMISED CONTROLLED TRIAL OF HOME BASED MEDICATION REVIEW AND LIFESTYLE ADVICE BY COMMUNITY PHARMACISTS FOR PATIENTS WITH HEART FAILURE

R. Holland1, I. Brooksby2, E. Lenaghan1, K. Ashton1, L. Hay3, R. Smith1, L. Shepstone1, A. Howe1, A. Lipp4, C. Daly5, I. Harvey1.1School of Medicine, Health Policy and Practice, University of East Anglia, Norwich, UK; 2Norfolk and Norwich University Hospital NHS Trust, Norwich, UK; 3Information Services (ISD), Edinburgh, UK; 4Great Yarmouth Primary Care Trust, Great Yarmouth, UK; 5Academic Pharmacy Practice Unit, University of East Anglia, Norwich, UK

Objectives: Multidisciplinary interventions including medication review, symptom self management, and lifestyle advice reduce admissions and mortality in heart failure. However, most trials have used small numbers of specialist staff to deliver interventions. Previous studies have not tested whether community pharmacists can deliver effective heart failure interventions.

Design: Randomised controlled trial.

Setting: Home based intervention in heart failure patients after discharge from acute hospitals in Norfolk.

Participants: 339 participants were recruited who had been admitted as an emergency, had heart failure, and were due to be discharged home; 46 patients were excluded after randomisation (18 died before discharge, 17 had an unconfirmed heart failure diagnosis, seven were not discharged home, and four were excluded for other reasons), leaving 293 patients in the trial (149 intervention, 144 control).

Intervention: The intervention involved two home visits by one of 17 community pharmacists, at two and eight weeks after discharge. Pharmacists reviewed drugs and gave symptom self management and lifestyle advice. Controls received usual care.

Main outcome measures: The primary outcome was total hospital readmissions (all-cause) at six months. Secondary outcomes included mortality, quality of life (Minnesota Living with Heart Failure questionnaire (MLHFQ) and EQ-5D), medication adherence (Medication Adherence Rating Scale), and behaviour change (European heart failure self care behaviour scale). Primary outcome data were available for 290 participants (99%).

Results: 136 intervention patients (92%) received one or more visits. Interventions lasted a mean of six hours, including approximately two hours in patients’ homes, two hours on administration, and two hours travelling. There were 132 admissions in the intervention group and 113 in controls (rate ratio = 1.13 (95% confidence interval, 0.88 to 1.45), p = 0.35, Poisson model); 31 intervention patients died, compared with 24 controls (hazard ratio = 1.24 (0.72 to 2.11), p = 0.44). While EQ-5D scores favoured the intervention group, MLHFQ scores favoured controls; neither difference was statistically significant. Both groups reported good medication adherence, with no difference observed, and improved heart failure self care, with results non-significantly favouring the intervention group.
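The Poisson comparison of admission counts reported above can be sketched as follows, with simulated data and hypothetical names; the trial model may also have adjusted for length of follow up.

```python
# Sketch: Poisson regression comparing readmission counts between arms
# (simulated data, not the trial dataset).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "admissions": rng.poisson(0.8, 290),  # readmissions per patient at six months
    "group": rng.integers(0, 2, 290),     # 1 = pharmacist intervention arm
})

fit = smf.poisson("admissions ~ group", data=df).fit(disp=False)
rr_ci = np.exp(fit.conf_int().loc["group"])
print(f"rate ratio = {np.exp(fit.params['group']):.2f} "
      f"(95% CI {rr_ci.iloc[0]:.2f} to {rr_ci.iloc[1]:.2f})")
```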

Conclusion: This trial failed to show that community pharmacist visits providing medication review and advice to heart failure patients after hospital discharge can achieve the reductions in admissions and mortality, or the gains in quality of life, observed with more intensive specialist services. Given that heart failure accounts for 5% of hospital admissions, these results present a problem for policy makers, who face a shortage of specialist provision yet want widely available services that could reduce these admissions.