Postal surveys have been used for many years by researchers to obtain information about the attitudes, knowledge and self-reported behaviour of general practitioners. It is widely believed that poor response rates undermine the validity of survey research,1 2 but only a few studies have tackled the methodological question of whether a low response rate matters.3
Non-responders to general practitioner postal surveys have been characterised as older, isolated and less well qualified,4 but these findings relate to individual questionnaires and univariable analyses. To the best of our knowledge no studies have examined the characteristics of GPs who routinely do not return postal questionnaires. These GPs could form an identifiable and sizeable proportion of general practice and may be an important source of non-response bias.
Methods and Results
We identified five postal questionnaires sent to GP principals in Avon (which includes Bristol, Bath and several rural towns) between 1994 and 1999. Principal investigators for each project were contacted and asked to provide a list of GPs included in their study and whether or not they responded. The surveys concerned mental health services (response rate 65%), treatment of hypertension (61%), diagnosis of asthma (42%), diagnosis and treatment of acute bronchitis (72%) and emergency contraception (83%).
We obtained information about GPs' personal and practice characteristics from Avon Health Authority and from data supplied by the GPs themselves in the five questionnaires. Some of these characteristics had previously been shown to influence response rates, namely age, gender, years in general practice, postgraduate qualifications, the number of partners in a practice and whether it was a postgraduate training practice. In addition we assessed the role of factors chosen to reflect (a) indicators of quality—audit participation, level of computerisation; (b) services provided to patients—being on the minor surgery, child health surveillance or obstetric lists; (c) patient demand—Townsend score, consultation rate; and (d) undergraduate teaching or research interest—being a teaching practice, practice participation in a primary care based randomised controlled trial (RCT) in which all Avon practices were invited to participate.
To investigate the characteristics of routine non-responders we fitted univariable and multivariable logistic regression models. The outcome variable used in these analyses was a binary variable indicating whether or not the GP was a "serial non-responder" (defined a priori as a GP who failed to respond to all, or all but one, of the five questionnaires—that is, who responded to at most one).
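The multivariable analysis described above can be sketched as follows. This is an illustrative sketch only: the data are synthetic, and the variable names, effect sizes and model specification are assumptions for demonstration, not the study's actual dataset or fitted model. The logistic model is fitted here by Newton–Raphson iteration using only numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 355  # number of GPs sent all five questionnaires (from the letter)

# Hypothetical predictors (assumed for illustration):
# age in years, postgraduate qualification (0/1), training practice (0/1)
age = rng.normal(45.0, 8.0, n)
postgrad = rng.integers(0, 2, n).astype(float)
training = rng.integers(0, 2, n).astype(float)

# Invented "true" coefficients used only to simulate an outcome in which
# older GPs without postgraduate links are more often serial non-responders
true_beta = np.array([-4.0, 0.05, -0.8, -0.7])
X = np.column_stack([np.ones(n), age, postgrad, training])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p_true).astype(float)  # 1 = serial non-responder

# Newton-Raphson fit of the multivariable logistic regression
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None]) # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta)  # adjusted (multivariable) odds ratios
```

Exponentiating the coefficients gives the adjusted odds ratios reported in analyses of this kind, each estimating the association of one characteristic with serial non-response while holding the others fixed.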
Of 580 GPs in Avon in 1999, 355 had been sent all five questionnaires. Of these GPs 25.9% responded to all five questionnaires, 26.2% to four, 19.2% to three, 13.5% to two, 9.6% to one and 5.6% to none; 15.2% were classified as "serial non-responders". The characteristics of GPs significantly associated (at p<0.05) with non-response in the univariable and multivariable models are shown in table 1.
"Serial non-responders" tended to be older and were less likely to possess a postgraduate medical qualification or to belong to a practice involved in postgraduate or undergraduate training. Previous analyses of non-responders have only examined univariable associations between GP characteristics and non-response. Factors such as age, membership of the Royal College of General Practitioners, and belonging to a teaching or training practice are clearly interrelated, but our multivariable analysis enables us to estimate their independent associations.
Non-response bias in postal questionnaires will only occur if there are differences between responders and non-responders in their knowledge, attitudes and beliefs that then lead to systematic differences in measured outcomes. One study demonstrated differences in responses between GPs returning a postal survey about their work with patients who misuse alcohol and non-responders who were subsequently contacted by telephone.5 The differences identified in our analysis suggest that non-response bias may occur in such surveys and could affect survey results. Therefore researchers should be aware of the factors that have been shown to increase response rates, such as incentives and reminder letters,6 but should also assess non-response bias by performing appropriate weighting for non-response and sensitivity analyses by age, professional and practice characteristics.
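The weighting for non-response mentioned above can be illustrated with a minimal inverse-probability-weighting sketch. All quantities here are invented for illustration (the letter reports none of them): a hypothetical binary stratum with different response propensities, and a hypothetical survey answer that differs between strata, which is exactly the situation in which unweighted estimates are biased.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 355
older = rng.integers(0, 2, n)  # hypothetical stratum: older GP (0/1)

# Assumed response propensities: older GPs respond less often
p_respond = np.where(older == 1, 0.60, 0.85)
responded = rng.binomial(1, p_respond).astype(bool)

# Hypothetical survey answer that differs by stratum (the source of bias)
answer = np.where(older == 1,
                  rng.normal(3.0, 1.0, n),
                  rng.normal(4.0, 1.0, n))

# Unweighted estimate: over-represents the stratum that responds more
naive_mean = answer[responded].mean()

# Inverse-probability weighting: each respondent stands in for
# 1 / p_respond sampled GPs with the same characteristics
w = 1.0 / p_respond[responded]
weighted_mean = np.sum(w * answer[responded]) / np.sum(w)
```

In practice the response propensities would themselves be estimated, for example from a logistic regression of response on the age, professional and practice characteristics available for all sampled GPs, rather than assumed as they are here.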
Funding: this study was supported by a grant from the South and West Research and Development Directorate.
Conflicts of interest: none.