Article Text
Abstract
Background The regression discontinuity (RD) design permits strong causal inference in non-randomised studies with relatively weak assumptions, yet is not widely known in public health research. Our objectives were to systematically identify how RD designs have been used to evaluate the health effects of interventions or exposures, and to describe the quality of these studies.
Methods We included primary studies that used an RD design to investigate the physical or mental health outcomes of any interventions or exposures in any populations. We searched 32 health, social science, and grey literature databases from 1960 to March 2015, with no language restrictions. We searched for “regression discontinuity” or “regression-discontinuity” in title, abstract, index terms, and full text. Bibliographies of review articles on RD and of included studies were hand-searched for additional references. We adapted the US Department of Education What Works Clearinghouse Standards for RD Designs and evaluated each study against ten criteria to assess risk of bias and statistical design. Using narrative synthesis and a mapping approach, we coded studies by discipline and produced tables of studies organised by study topic. Descriptive themes related to the forcing variable used and to the interventions, exposures, and outcomes of each study.
Results Searches identified 3847 citations, of which 2229 were duplicates and 1441 did not meet inclusion criteria, leaving 177 included studies. A broad range of public health policy areas and evaluations was represented, as well as questions of clinical effectiveness and of epidemiological cause and effect. Commonly used forcing variables were age, income, date of policy change, area-based indicators such as poverty or literacy rates, and clinical measures that act as a threshold for intervention, such as birthweight or measures of risk. Only 5% of the studies fully met all ten quality appraisal criteria. Common issues in study quality included lack of information about study attrition, failure to assess baseline equivalence on covariates, lack of falsification tests, and failure to establish that the forcing variable was unconfounded. Only 8% of studies reported a pre-specified primary outcome or study protocol.
Conclusion This systematic review is the most comprehensive to date on RD designs and demonstrates that the method is more widely applicable to public health research than previously appreciated. Implementation and reporting often fall short of quality standards for RD designs, which suggests that the potential benefits of this method have not yet been fully realised.
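To illustrate the design discussed above: in a sharp RD, units on one side of a cutoff in the forcing variable (e.g. an age or income eligibility threshold) receive the intervention, and the treatment effect is estimated as the jump in the outcome at the cutoff. The sketch below is not from any included study; it uses simulated data and hypothetical parameter values (bandwidth, cutoff, effect size) purely to show the mechanics of a local-linear RD estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sharp RD: units with forcing variable x >= cutoff receive treatment.
n, cutoff, true_effect = 2000, 0.0, 1.5
x = rng.uniform(-1, 1, n)            # forcing variable (e.g. age relative to eligibility)
treated = (x >= cutoff).astype(float)
y = 0.8 * x + true_effect * treated + rng.normal(0, 0.3, n)  # outcome

# Local linear fits within a bandwidth h on each side of the cutoff.
h = 0.25
left = (x >= cutoff - h) & (x < cutoff)
right = (x >= cutoff) & (x <= cutoff + h)

def intercept_at_cutoff(xs, ys):
    # Fit y = a + b*(x - cutoff); the intercept a predicts the outcome at the cutoff.
    b, a = np.polyfit(xs - cutoff, ys, 1)
    return a

# RD estimate: discontinuity in the fitted outcome at the cutoff.
effect = intercept_at_cutoff(x[right], y[right]) - intercept_at_cutoff(x[left], y[left])
print(f"estimated effect: {effect:.2f}")  # close to the simulated effect of 1.5
```

In practice, as the review's quality criteria imply, such an estimate should be accompanied by checks of baseline covariate balance, falsification tests at placebo cutoffs, and evidence that the forcing variable was not manipulated.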