Meta-analysis of randomised trials of monetary incentives and response to mailed questionnaires
1. Phil Edwards,
2. Rachel Cooper,
3. Ian Roberts,
4. Chris Frost
1. Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK
1. Correspondence to:  Dr P Edwards  Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT, UK; phil.edwards@lshtm.ac.uk

## Footnotes

• Funding: none.

• Conflicts of interest: none declared.

Mailed questionnaires are often used for data collection. When collecting information from large, geographically dispersed populations, the mailed questionnaire may be the only practical and financially viable method available for researchers.1 However, non-response to mailed questionnaires reduces the effective sample size and can introduce bias.2 Strategies that can increase response to mailed questionnaires have been identified and include the use of monetary incentives.3

Several reviews and meta-analyses of studies of the effect of monetary incentives on questionnaire response have been published in the past 30 years,4,5,6,7,8,9,10 but none has been based on a systematic search of the literature. We conducted a meta-analysis of the data obtained from a systematic search of the literature to quantify the increase in response attributable to a monetary incentive.

## METHODS

### Systematic review

A systematic search was initially made for all randomised controlled trials of any method to influence response to a mailed questionnaire.3 We updated the systematic search for trials and included all trials published by February 2003. There was no restriction by language, questionnaire topic, or study population. We searched 14 electronic bibliographic databases, the reference lists of relevant trials, and all issues of two journals in which the largest number of eligible trials had been published (American Journal of Epidemiology and Public Opinion Quarterly). The reports of potentially relevant trials were obtained and two reviewers assessed each for its eligibility. We estimated the sensitivity of the combined search strategy (electronic searching and manual searches of reference lists) by comparing the trials identified with this strategy with those identified by manually searching journals. The authors of eligible trials were contacted by mail or e-mail for any information required for the review that was missing from the published reports, and were also asked whether they knew of unpublished trials.

### Data extracted

Data were extracted from each study on: the amount and currency of the monetary incentives used; whether incentives were mailed with questionnaires (“unconditional” incentives) or given to participants after questionnaires had been returned (“conditional” incentives); the year the study was conducted; the numbers of participants randomised; and the numbers who responded. When the year a study was conducted was unknown, it was estimated by subtracting from the year of publication the average delay between study year and publication year, calculated from those studies for which this information was available. The amounts of monetary incentive were standardised by converting them to US dollars and then updating them to present day values using the American Institute for Economic Research’s Cost of Living Calculator.11 To investigate whether pre-specified study characteristics modify the effect of an incentive on response, data were extracted from each report on the number of pages in each questionnaire and on whether: the questionnaire topic was health related; participants were groups of professionals (for example, physicians); a non-monetary incentive was used in addition to the monetary incentive; the organisation conducting the study was an academic institution; a reply paid envelope was provided; participants were notified in advance of the questionnaire being sent; the questionnaires were sent by special delivery; and follow up reminders were sent to non-respondents.
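The standardisation step described above is a simple two-stage conversion. The sketch below illustrates it; the exchange rates and inflation factors are illustrative placeholders, not the figures produced by the Cost of Living Calculator used in the review.

```python
# Sketch of incentive standardisation: convert to US dollars at the study
# year's exchange rate, then inflate to present-day value.
# All rates below are ILLUSTRATIVE, not the values used in the review.

def present_value_usd(amount, currency, study_year,
                      usd_per_unit, inflation_factor):
    """Return the present-day US dollar value of an incentive."""
    usd_at_study_year = amount * usd_per_unit[currency]
    return usd_at_study_year * inflation_factor[study_year]

# Illustrative rates only.
usd_per_unit = {"USD": 1.0, "GBP": 1.6}
inflation_factor = {1970: 4.7, 1990: 1.4}

# A $0.25 incentive from a (hypothetical) 1970 study in present-day dollars.
print(present_value_usd(0.25, "USD", 1970, usd_per_unit, inflation_factor))
```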

### Statistical methods

For each trial identified we used logistic regression to estimate the odds ratio for response per $0.01 increase in the amount of incentive offered unconditionally and conditionally. Our a priori hypothesis was that there are diminishing marginal gains in response for each additional $0.01 increase in incentive given. We pooled the logistic regression coefficients in a series of random effects meta-analyses stratified according to the minimum and maximum amount offered in each trial: $0, $0.01–0.49, $0.50–0.99, $1.00–1.99, $2.00–4.99, $5.00 and over, and by whether they were conditional or unconditional. Combined odds ratios were calculated within strata as a weighted average of the odds ratios from each trial, using standard errors as weights and incorporating an estimate of the between-study heterogeneity into the weights.12 Heterogeneity among the trial coefficients was assessed with a χ2 test using a 5% level for significance. In the absence of significant statistical heterogeneity, we assessed evidence for selection bias (for example, publication bias) using Egger’s weighted regression method and Begg’s rank correlation test and funnel plot.13
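The pooling step described above, in which per-trial coefficients are combined with weights that incorporate an estimate of between-study heterogeneity, can be sketched as a DerSimonian-Laird random effects meta-analysis. This is a minimal illustration of that general technique, not the authors' actual code; the toy coefficients are invented.

```python
import math

def dersimonian_laird(betas, ses):
    """Random effects pooling of per-trial log odds ratios
    (here, the log OR for response per $0.01 of incentive).
    betas: per-trial logistic regression coefficients; ses: their standard errors."""
    w = [1 / se**2 for se in ses]                                # inverse-variance weights
    beta_fe = sum(wi * b for wi, b in zip(w, betas)) / sum(w)    # fixed-effect estimate
    q = sum(wi * (b - beta_fe)**2 for wi, b in zip(w, betas))    # Cochran's Q
    df = len(betas) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]                    # random-effect weights
    beta_re = sum(wi * b for wi, b in zip(w_re, betas)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return beta_re, se_re, tau2

# Toy data: three trials' log odds ratios per $0.01 and their standard errors.
beta, se, tau2 = dersimonian_laird([0.012, 0.008, 0.015], [0.003, 0.004, 0.005])
print(math.exp(beta))  # pooled odds ratio for response per $0.01
```

Exponentiating the pooled coefficient recovers the combined odds ratio reported within each stratum.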

We used the Stata statistical software (StataCorp, College Station, TX) to fit a piecewise logistic regression model to describe the relation between response rate and amount of incentive (see appendix available online http://www.jech.com/supplemental). The regression coefficients in this model estimate the odds ratio for response for each $0.01 increase in incentive in each of the ranges: $0.00–0.49, $0.50–0.99, $1.00–1.99, $2.00–4.99, and $5.00 and over. Standard errors from the model were adjusted using the χ2 goodness of fit statistic to allow for over-dispersion. The model was also extended to investigate interactions between the amount of money offered and the study characteristics, such as whether questionnaire topics were health related.
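A piecewise model with one slope per incentive range is a linear spline on the logit scale, with knots at the range boundaries. The sketch below shows how such a spline basis is constructed; the knot placement follows the ranges above, but the coefficient values are illustrative placeholders, not the fitted estimates.

```python
import math

# Knots at 50, 100, 200 and 500 cents give five slopes, one per incentive
# range: $0.00-0.49, $0.50-0.99, $1.00-1.99, $2.00-4.99, and $5.00 and over.
KNOTS = [0, 50, 100, 200, 500]

def spline_basis(cents):
    """Split an incentive into the number of cents falling in each range."""
    basis = [min(max(cents - lo, 0), hi - lo) for lo, hi in zip(KNOTS, KNOTS[1:])]
    basis.append(max(cents - KNOTS[-1], 0))  # open-ended top range
    return basis

def response_probability(cents, intercept, slopes):
    """Fitted probability of response under the piecewise logistic model."""
    eta = intercept + sum(s * x for s, x in zip(slopes, spline_basis(cents)))
    return 1 / (1 + math.exp(-eta))

# ILLUSTRATIVE coefficients only (log odds per cent in each range),
# chosen to show diminishing slopes, not taken from the fitted model.
slopes = [0.0114, 0.004, 0.002, 0.001, 0.0001]
print(spline_basis(120))  # -> [50, 50, 20, 0, 0]: $1.20 split across ranges
print(response_probability(25, 0.0, slopes))
```

Because each basis column counts only the cents falling inside its range, the fitted log odds are continuous in the amount given while the slope is free to change at each knot.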

## RESULTS

The systematic search for randomised controlled trials of methods to influence response to mailed questionnaires yielded a total of 28 994 records of potentially relevant reports. After screening records and obtaining copies of the reports considered to be relevant for further inspection, 325 reports were found to contain one or more such trials. Of the 325 reports, 69 included trials of the effect of monetary incentives on response (table 1), in which a total of 85 671 participants had been randomised. Contact was made with 22 authors and no unpublished trials were identified. Three reports contained the results of two trials and two reports contained the results of three trials, so there were 76 trials in total. Thirty three trials had evaluated two or more alternative amounts of money and the remainder had compared a single amount with no incentive. In addition, 10 trials had factorial designs combining investigation of a monetary incentive with one other factor (nine trials) or two other factors (one trial). In the following analyses each of these was considered as two (or four) separate smaller trials defined according to the levels of the other factor(s), giving a total of 88 trials available for analysis. Among the 88 trials, 73 (83%) evaluated unconditional incentives only, six (7%) evaluated conditional incentives only, and nine (10%) evaluated both conditional and unconditional incentives. A total of 79 (90%) studies were conducted by academic institutions, 28 (32%) were known to have studied health related topics, 36 (41%) were known to have specifically targeted professional groups, 50 (57%) were known to have sent reminders to non-respondents, 51 (58%) were known to have included reply paid envelopes, and eight (9%) were known to have also used non-monetary incentives. The studies were published in a range of journals covering marketing, psychological, and medical research, and the average time between trials being conducted and being published was three years.
The earliest trial located was published in 1940 and the most recent in 2003. Present values of monetary incentives ranged from $0.04 to $161. There was significant heterogeneity among the odds ratios for response per $0.01 increase in the amount of incentive offered derived from each trial (p<0.00001). Among the 82 trials that evaluated unconditional incentives, 80 (98%) found a positive effect on response, and among the 15 trials of conditional incentives, 14 (93%) found a positive effect.

Table 1

Description of studies included in the meta-analysis

### Stratified meta-analyses

Table 2 shows the combined odds ratios for response per $0.01 incentive increase among the 82 trials that evaluated unconditional incentives, stratified by the minimum and maximum amounts given in each trial.

Table 2

Odds ratios (95% confidence intervals) for response per $0.01 increase in incentive in 82 trials that evaluated unconditional incentives, stratified by the minimum and maximum amounts given in each trial

Figure 1 shows the results for the 74 trials in the five strata in which the minimum amount given was $0. Among trials in which the maximum amount given was less than $0.50 (fig 1(A); 15 trials) the odds of response per $0.01 were increased by 1.2% (95% confidence interval (CI): 0.6% to 1.7%). In the remaining strata, the pooled effect sizes were progressively smaller as the maximum amount given increased. There was significant heterogeneity among the trial results within each of these five strata (p<0.05). Tests for selection bias were not conducted in the remaining strata, as there were too few trials.

Figure 1

Odds ratios (with 95% confidence intervals shown on a log scale) for response per $0.01 increase in incentive given unconditionally, stratified by the minimum and maximum amount given in each trial: (A) min $0 v max $0.00–0.49, (B) min $0 v max $0.50–0.99, (C) min $0 v max $1.00–1.99, (D) min $0 v max $2.00–4.99, (E) min $0 v max $5.00 and over.

There were 15 trials that had evaluated conditional incentives (not shown in the figure). When these trials were stratified by the minimum and maximum amount offered, two strata contained more than one trial. There were four trials in which the maximum incentive offered was between $2 and $4.99, and among these trials the odds of response per $0.01 were increased by 0.1% (95% CI: 0.0% to 0.2%). There were eight trials in which the maximum incentive offered was $5 or over, and among these trials the odds of response for each $0.01 increase were almost unchanged (95% CI: 0.0% to 0.1%). There was significant heterogeneity among the trial results within both strata (p<0.05). In these two strata, the increases in the odds of response per $0.01 were lower than those for unconditional incentives.

### Piecewise logistic regression

Using the 82 trials that evaluated unconditional incentives, the piecewise logistic regression model estimated the odds ratio associated with a $0.01 increase in each of five incentive ranges. The fitted relation between the odds of response and the amount of incentive given is shown in figure 2, with coefficients from the model shown in table 3. Between $0.01 and $0.49, the odds of response per $0.01 were increased by 1.15% (95% CI: 0.72% to 1.58%). The effect of a $0.01 increase above $0.50 was smaller in each successive incentive range, with the effect of an increase between $2 and $5 still achieving statistical significance.

Table 3

Odds ratios for response per $0.01 increase in unconditional incentive in each of five incentive ranges, estimated from a piecewise linear logistic regression model

Figure 2

Odds ratios for response according to amount of incentive given unconditionally, estimated from a piecewise linear logistic regression model (95% confidence intervals shown for incentive levels where the gradient of response is allowed to change).

### Policy implications

• This study shows that researchers should include at least a small amount of money with questionnaires rather than give no incentive at all.

• Local research ethics committees, when considering study designs, should be aware that small payments to participants for completion of questionnaires can reduce non-response.

### Implications for the design of mailed data collection strategies

In research using mailed questionnaires to collect data, small monetary incentives may be effective in increasing response compared with offering no incentive at all. The response rates that may be expected at different amounts of incentive given unconditionally are shown in figure 3. Depending on the study resources, small amounts can be offered to participants as tokens of appreciation or larger amounts offered as compensation for their time. Although ethical considerations will need to be taken into account before payments to participants are included in the study design, such inducements should be acceptable if kept small, or when the amount of time and effort required of participants exceeds a certain threshold. Before deciding on the amount of incentive to use, other related costs need to be considered, such as the costs of printing, packing, and mailing the questionnaires. Larger incentives cost more than smaller ones, but in studies where reminders are sent to non-respondents, the cost may be offset by a corresponding reduction in the numbers of questionnaires that need to be printed, packed, and mailed for the reminders.

Figure 3

Increases in response rates from different baseline values by amount of money given (estimates based on the results of a piecewise linear logistic regression model).

This study shows that monetary incentives can increase response to mailed questionnaires but the relation between the amount of money and response is not linear. For amounts up to $0.50, each additional $0.01 given with a questionnaire can increase the odds of response by about 1%. Each additional $0.01 given in the ranges $0.50–0.99, $1.00–1.99, $2.00–4.99, and $5.00 and over will result in a diminishing marginal increase in response.
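Because the roughly 1% figure applies to the *odds* of response, the implied change in the response *rate* depends on the baseline. A short worked example, using the approximate 1.01 odds ratio per $0.01 from the paragraph above (valid only up to about $0.50), converts it to a projected response rate:

```python
def expected_response(baseline_rate, cents, or_per_cent=1.01):
    """Project a response rate after applying the per-cent odds ratio.
    The ~1% per $0.01 figure holds only for incentives up to about $0.50."""
    baseline_odds = baseline_rate / (1 - baseline_rate)
    odds = baseline_odds * or_per_cent**cents  # odds scale multiplicatively
    return odds / (1 + odds)                   # convert odds back to a rate

# e.g. a 50% baseline response rate with a $0.25 unconditional incentive:
# odds go from 1.0 to about 1.01**25 = 1.28, i.e. roughly 56% response.
print(round(expected_response(0.50, 25), 3))
```

This is why the same odds ratio translates into a larger absolute gain in percentage points near a 50% baseline than at very high or very low baseline response rates.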

• This meta-analysis of the best available evidence quantifies the effect on response of giving varying amounts of monetary incentive.

• This study confirms that monetary incentives increase mailed questionnaire response and shows that the marginal benefits diminish as the amount of incentive offered increases.


## Supplementary materials

• The appendix is available as a downloadable PDF (printer friendly file).

Files in this Data Supplement:

• [view PDF] - Appendix: References to studies included in the meta analysis.
