Mailed questionnaires are often used for data collection. When collecting information from large, geographically dispersed populations, the mailed questionnaire may be the only practical and financially viable method available for researchers.1 However, non-response to mailed questionnaires reduces the effective sample size and can introduce bias.2 Strategies that can increase response to mailed questionnaires have been identified and include the use of monetary incentives.3
Several reviews and meta-analyses of studies of the effect of monetary incentives on questionnaire response have been published in the past 30 years,4,5,6,7,8,9,10 but none has been based on a systematic search of the literature. We conducted a meta-analysis of the data obtained from a systematic search of the literature to quantify the increase in response attributable to a monetary incentive.
A systematic search was initially made for all randomised controlled trials of any method to influence response to a mailed questionnaire.3 We updated the systematic search for trials and included all trials published by February 2003. There was no restriction by language, questionnaire topic, or study population. We searched 14 electronic bibliographic databases, the reference lists of relevant trials, and all issues of two journals in which the largest number of eligible trials had been published (American Journal of Epidemiology and Public Opinion Quarterly). The reports of potentially relevant trials were obtained and two reviewers assessed each for its eligibility. We estimated the sensitivity of the combined search strategy (electronic searching and manual searches of reference lists) by comparing the trials identified with this strategy with those identified by manually searching journals. The authors of eligible trials were contacted by mail or e-mail for any information required for the review that was missing from the published reports, and were also asked whether they knew of unpublished trials.
Data were extracted from each study on: the amount and currency of the monetary incentives used; whether incentives were mailed with questionnaires (“unconditional” incentives) or given to participants after questionnaires had been returned (“conditional” incentives); the year the study was conducted; the numbers of participants randomised; and the numbers who responded. When the year a study was conducted was unknown, it was estimated by subtracting from the year of publication the average delay between study year and publication year, calculated from those studies for which this information was available. The amounts of monetary incentive were standardised by converting them to US dollars and then updating them to present day values using the American Institute for Economic Research’s Cost of Living Calculator.11 To investigate whether pre-specified study characteristics modify the effect of an incentive on response, data were extracted from each report on the number of pages in each questionnaire and on whether: the questionnaire topic was health related; participants were groups of professionals (for example, physicians); a non-monetary incentive was used in addition to the monetary incentive; the organisation conducting the study was an academic institution; a reply paid envelope was provided; participants were notified in advance of the questionnaire being sent; the questionnaires were sent by special delivery; and follow up reminders were sent to non-respondents.
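The two-step standardisation of incentive amounts can be sketched as follows; the exchange rate and price-index values below are illustrative placeholders, not figures from the review (which used the American Institute for Economic Research’s Cost of Living Calculator).

```python
def present_value_usd(amount, usd_per_unit, cpi_then, cpi_now):
    """Convert a historical incentive to present day US dollars.

    amount       -- incentive in its original currency
    usd_per_unit -- exchange rate at the time of the study (assumed value)
    cpi_then     -- price index for the study year (assumed value)
    cpi_now      -- price index for the present day (assumed value)
    """
    usd_then = amount * usd_per_unit          # step 1: convert to US dollars
    return usd_then * (cpi_now / cpi_then)    # step 2: update to present value

# For example, a 10p incentive from a hypothetical 1975 UK study, using
# an illustrative exchange rate of 2.22 USD/GBP and an illustrative
# price-index ratio of 180.0/53.8:
pv = present_value_usd(0.10, 2.22, 53.8, 180.0)
```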
For each trial identified we used logistic regression to estimate the odds ratio for response per $0.01 increase in the amount of incentive offered unconditionally and conditionally. Our a priori hypothesis was that there are diminishing marginal gains in response for each additional $0.01 increase in incentive given. We pooled the logistic regression coefficients in a series of random effects meta-analyses stratified according to the minimum and maximum amount offered in each trial ($0, $0.01–0.49, $0.50–0.99, $1.00–1.99, $2.00–4.99, and $5.00 and over) and by whether the incentives were conditional or unconditional. Combined odds ratios were calculated within strata as a weighted average of the odds ratios from each trial, weighting each trial by the inverse of its variance and incorporating an estimate of the between-study heterogeneity into the weights.12 Heterogeneity among the trial coefficients was assessed with a χ2 test using a 5% level of significance. In the absence of significant statistical heterogeneity, we assessed evidence for selection bias (for example, publication bias) using Egger’s weighted regression method and Begg’s rank correlation test and funnel plot.13
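One plausible reading of this pooling step is a DerSimonian-Laird random effects meta-analysis of the per-trial log odds ratios (per $0.01); the sketch below uses hypothetical trial estimates and standard errors, and is not the review’s actual code.

```python
import math

def pool_random_effects(log_ors, ses):
    """DerSimonian-Laird random effects pooling of per-trial log odds
    ratios. Returns the pooled log OR, its standard error, and the
    between-study variance estimate (tau-squared)."""
    k = len(log_ors)
    w = [1.0 / se**2 for se in ses]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))  # Cochran's Q
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]         # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_ors)) / sum(w_star)
    return pooled, math.sqrt(1.0 / sum(w_star)), tau2

# Three hypothetical trials, each contributing a log OR per $0.01:
pooled, se, tau2 = pool_random_effects([0.012, 0.008, 0.015],
                                       [0.003, 0.004, 0.005])
or_per_cent = math.exp(pooled)  # pooled odds ratio per $0.01 increase
```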
We used Stata statistical software (StataCorp, College Station, TX) to fit a piecewise logistic regression model describing the relation between response rate and amount of incentive (see appendix available online at http://www.jech.com/supplemental). The regression coefficients in this model estimate the odds ratio for response for each $0.01 increase in incentive within each of the ranges $0.00–0.49, $0.50–0.99, $1.00–1.99, $2.00–4.99, and $5.00 and over. Standard errors from the model were adjusted using the χ2 goodness of fit statistic to allow for over-dispersion. The model was also extended to investigate interactions between the amount of money offered and the study characteristics, such as whether questionnaire topics were health related.
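The piecewise dose-response terms can be illustrated with a small sketch: each incentive amount is split into the number of $0.01 steps falling in each range, and the per-range odds ratios (placeholders below, not the fitted values from table 3) are compounded once per step.

```python
# Lower bounds of the five ranges, in cents: $0, $0.50, $1, $2, $5.
CUTS = [0, 49, 99, 199, 499]

def piecewise_cents(amount_cents):
    """Split an incentive into the number of $0.01 steps falling in each
    of the ranges $0.01-0.49, $0.50-0.99, $1.00-1.99, $2.00-4.99, $5.00+."""
    pieces = []
    for i, lo in enumerate(CUTS):
        hi = CUTS[i + 1] if i + 1 < len(CUTS) else float("inf")
        pieces.append(min(max(amount_cents - lo, 0), hi - lo))
    return pieces

def predicted_odds(baseline_odds, amount_cents, ors_per_cent):
    """Multiply the baseline odds by each range's odds ratio, once for
    every $0.01 step of the incentive that falls in that range."""
    odds = baseline_odds
    for steps, or_ in zip(piecewise_cents(amount_cents), ors_per_cent):
        odds *= or_ ** steps
    return odds

# A $1.20 incentive contributes 49 steps in the first range, all 50 in
# the second, and 21 in the third:
pieces = piecewise_cents(120)
```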
The systematic search for randomised controlled trials of methods to influence response to mailed questionnaires yielded a total of 28 994 records of potentially relevant reports. After screening records and obtaining copies of the reports considered relevant for further inspection, 325 reports were found to contain one or more such trials. Of the 325 reports, 69 included trials of monetary incentives on response (table 1), in which a total of 85 671 participants had been randomised. Contact was made with 22 authors and no unpublished trials were identified. Three reports contained the results of two trials and two reports contained the results of three trials, giving 76 trials in total. Thirty three trials had evaluated two or more alternative amounts of money and the remainder had compared a single amount with no incentive. In addition, 10 trials had factorial designs combining investigation of a monetary incentive with one other factor (nine trials) or two other factors (one trial). In the following analyses each of these was considered as two (or four) separate smaller trials defined according to the levels of the other factor(s), giving a total of 88 trials available for analysis. Among the 88 trials, 73 (83%) evaluated unconditional incentives only, six (7%) evaluated conditional incentives only, and nine (10%) evaluated both conditional and unconditional incentives. A total of 79 (90%) studies were conducted by academic institutions, 28 (32%) were known to have studied health related topics, 36 (41%) were known to have specifically targeted professional groups, 50 (57%) were known to have sent reminders to non-respondents, 51 (58%) were known to have included reply paid envelopes, and eight (9%) were known to have also used non-monetary incentives. The studies were published in a range of journals covering marketing, psychological, and medical research, and the average time between trials being conducted and being published was three years.
The earliest trial located was published in 1940 and the most recent in 2003. Present values of monetary incentives ranged from $0.04 to $161. There was significant heterogeneity among the odds ratios for response per $0.01 increase in the amount of incentive offered derived from each trial (p<0.00001). Among the 82 trials that evaluated unconditional incentives, 80 (98%) found a positive effect on response, and among the 15 trials of conditional incentives, 14 (93%) found a positive effect.
Table 2 shows the combined odds ratios for response per $0.01 incentive increase among the 82 trials that evaluated unconditional incentives, stratified by the minimum and maximum amounts given in each trial.
Figure 1 shows the results for the 74 trials in the five strata in which the minimum amount given was $0. Among trials in which the maximum amount given was less than $0.50 (fig 1(A)—15 trials) the odds of response per $0.01 were increased by 1.2% (95% confidence interval (CI): 0.6% to 1.7%). In the remaining strata, the pooled effect sizes were progressively smaller as the maximum amount given increased. There was significant heterogeneity among the trial results within each of these five strata (p<0.05). There were too few trials within strata to conduct tests for selection bias.
There were 15 trials that had evaluated conditional incentives (not shown in the figure). When these trials were stratified by the minimum and maximum amount offered, two strata contained more than one trial. There were four trials in which the maximum incentive offered was between $2 and $4.99, and among these trials the odds of response per $0.01 were increased by 0.1% (95% CI: 0.0% to 0.2%). There were eight trials in which the maximum incentive offered was $5 or over, and among these trials the odds of response for each $0.01 increase were almost unchanged (95% CI: 0.0% to 0.1%). There was significant heterogeneity among the trial results within both strata (p<0.05). In these two strata, the increases in the odds of response per $0.01 were lower than those for unconditional incentives.
Piecewise logistic regression
Using the 82 trials that evaluated unconditional incentives, the piecewise logistic regression model estimated the odds ratio associated with a $0.01 increase in each of five incentive ranges. The fitted relation between odds of response and the amount of incentive given is shown in figure 2, with coefficients from the model shown in table 3. Between $0.01 and $0.49, the odds of response per $0.01 were increased by 1.15% (95% CI: 0.72% to 1.58%). The effect of a $0.01 increase above $0.50 was smaller in each successive incentive range, with the effect of an increase between $2 and $5 still achieving statistical significance.
In an investigation of the extent to which the relation between amount of unconditional incentive and odds of response depended on trial characteristics, the only independently statistically significant effect modifier was whether or not a reminder was sent. A steeper relation was consistently seen if no reminder was sent, but the limited number of studies in each incentive range, once split by whether or not a reminder was sent, meant that the exact form of the relation could not be reliably identified. We explored the possibility that the impact of a monetary incentive on questionnaire response may have diminished over time. When studies conducted after 1975 were considered separately, we found that for incentive amounts up to $0.50, each additional $0.01 increased the odds of response by about 2% (p = 0.002). When health related trials were considered separately, the estimates for the effect of each $0.01 on the odds of response were slightly higher for amounts up to $1 and slightly lower for amounts over $1; however, none remained statistically significant.
The results of our systematic review and meta-analysis of randomised controlled trials confirm that monetary incentives can increase response to mailed questionnaires. Our stratified meta-analysis and piecewise logistic regression model provide evidence for a non-linear relation between amount of money offered and response: we found that the marginal increase in response for each $0.01 increase in incentive is highest for amounts up to $0.50. The effect on response for each additional $0.01 given above $0.50 was smaller and decreased monotonically, but was still statistically significant up to $5. This suggests that a $0.01 increase means much more to a study participant offered $0.25 (a 4% increase) than to one offered $2.50 (a 0.4% increase). We also found evidence that this relation is steeper when follow up reminders are not used and when incentives are given unconditionally. We found no evidence that the impact of monetary incentives on response has diminished over time. Before we consider the implications of these results for the design of mailed data collection strategies, several methodological issues with a bearing on the validity of the results must be addressed.
Strengths and weaknesses of the study
The most important step in the conduct of a systematic review and meta-analysis of randomised controlled trials is to identify and include all the relevant trials.14 In our meta-analysis we estimate that our search strategy retrieved nearly all eligible trials (sensitivity 95%; CI: 84% to 99%) and that we missed very few relevant records during screening.15 However, we excluded studies where we were unable to confirm with the authors that the participants had been randomly allocated to intervention or control groups. We did not examine whether the inclusion of these trials in our meta-analysis would have significantly changed our results. We were successful in contacting some of the authors of the included studies to ask about unpublished trials, but none was identified. We cannot rule out the possibility that other trials of monetary incentives have been conducted and remain unpublished, and that our results may therefore be biased. We did not conduct meta-analytic tests for selection bias because of significant statistical heterogeneity among the results of the included trials.13
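As an illustration of how a search sensitivity of 95% (CI: 84% to 99%) might arise, the sketch below computes a Wilson score interval for a proportion; the counts used are hypothetical assumptions for illustration, not the review’s actual numbers.

```python
import math

def wilson_ci(hits, n, z=1.96):
    """Wilson score interval for a proportion, here read as search
    sensitivity: the share of trials found by hand-searching journals
    that the combined electronic/reference-list strategy also found."""
    p = hits / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half) / denom, (centre + half) / denom

# Hypothetical counts: a hand search finds 40 eligible trials, of which
# the combined strategy retrieved 38, giving a sensitivity of 95%.
lo, hi = wilson_ci(38, 40)
```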
The results of some of the included trials may be biased if the allocation of participants to intervention or control groups was inadequately concealed at the time of randomisation.16 The methods of randomisation used to allocate participants were only described in nine reports, and we were unable to contact many authors of the other trials to obtain this information. We were therefore unable to investigate whether the exclusion of trials with inadequate allocation concealment would have significantly changed our results. Another potential source of bias in the included trials is that attributable to losses to follow up. However, the outcome we analysed (whether or not a questionnaire was returned) is known for all participants in all the included trials.
What is already known on this subject
Mailed questionnaires are often used for data collection in epidemiological studies, but non-response reduces the effective sample size and can also introduce bias. Monetary incentives (cash or cheques) are one method known to increase response rates.
Meta-analyses of the estimated effects of monetary incentives on questionnaire response have been published but have not been based on systematic searches of the literature.
Our study considered the amount of response that can be gained by using a monetary incentive. We did not investigate any effects that incentives may have on the accuracy and validity of the data collected. We are therefore unable to say whether the use of monetary incentives affects the quality of response, only that they seem to increase the quantity of response. Also, the overwhelming majority of the studies included in our meta-analysis were conducted in the developed world. Whether our results on the effects of a monetary incentive on questionnaire response can be generalised to the developing world remains a matter for judgement.
Strengths and weaknesses in relation to other studies
This is the first meta-analysis of studies of the effect of monetary incentives on questionnaire response to be based on a systematic search of the literature. Previous reviews and meta-analyses have drawn on only part of the literature, typically from specific disciplines (for example, marketing).4,5,6,7,8,9,10 One of these studies reached a similar conclusion regarding the relation between the amount of incentive and response.4 Based on an unweighted meta-analysis of the data from 17 studies of monetary incentives on response, the authors proposed a “rule of thumb” of a 1% decrease in non-response for each $0.01 increase in incentive, up to 40%. In a meta-analysis of data from 18 American studies of unconditional incentives, no relation was found between the size of the incentive and increases in response rates,8 whereas another found that increases in monetary incentive bring diminishing marginal gains in response.7 A meta-analysis of 38 randomised and quasi-randomised trials of monetary incentives concluded that unconditional incentives are the most effective; however, the relation between amount offered and response was not investigated.10
This study shows that researchers should include at least a small amount of money with questionnaires rather than give no incentive at all.
Local research ethics committees, when considering study designs, should be aware that small payments to participants for completion of questionnaires can reduce non-response.
Implications for the design of mailed data collection strategies
In research using mailed questionnaires to collect data, small monetary incentives may be effective in increasing response compared with offering no incentive at all. The response rates that may be expected at different amounts of incentive given unconditionally are shown in figure 3. Depending on study resources, small amounts can be offered to participants as tokens of appreciation, or larger amounts as compensation for their time. Although ethical considerations will need to be taken into account before payments to participants are included in a study design, such inducements should be acceptable if kept small, or when the amount of time and effort required of participants exceeds a certain threshold. Before deciding on the amount of incentive to use, other related costs need to be considered, such as the costs of printing, packing, and mailing the questionnaires. Larger incentives cost more than smaller ones, but in studies where reminders are sent to non-respondents, the cost may be offset by a corresponding reduction in the number of questionnaires that need to be printed, packed, and mailed as reminders.
This study shows that monetary incentives can increase response to mailed questionnaires but the relation between the amount of money and response is not linear. For amounts up to $0.50, each additional $0.01 given with a questionnaire can increase the odds of response by about 1%. Each additional $0.01 given in the ranges: $0.50–0.99, $1–1.99, $2–4.99, $5.00 and over, will result in a diminishing marginal increase in response.
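To see what a per-$0.01 odds ratio implies for response rates, the following sketch compounds an assumed 1% increase in odds per $0.01 over a $0.50 incentive, starting from an assumed 50% baseline response; both figures are illustrative, not estimates from this study.

```python
def response_after_incentive(baseline_rate, or_per_cent, cents):
    """Compound a per-$0.01 odds ratio over an incentive of `cents`
    cents, starting from a baseline response probability."""
    odds = baseline_rate / (1 - baseline_rate)   # probability -> odds
    odds *= or_per_cent ** cents                 # apply the OR once per cent
    return odds / (1 + odds)                     # odds -> probability

# Assumed 50% baseline and 1% odds increase per $0.01: a $0.50 incentive
# multiplies the odds by 1.01**50 (about 1.64), raising the expected
# response rate to roughly 62%.
rate = response_after_incentive(0.50, 1.01, 50)
```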
What this study adds
This meta-analysis of the best available evidence quantifies the effect on response of giving varying amounts of monetary incentive.
This study confirms that monetary incentives increase mailed questionnaire response and shows that the marginal benefits diminish as the amount of incentive offered increases.
Conflicts of interest: none declared.