There is renewed optimism regarding the use of natural experimental studies to generate evidence as to the effectiveness of population health interventions. Natural experimental studies capitalise on environmental and policy events that alter exposure to certain social, economic or environmental factors that influence health. Natural experimental studies can be useful for examining the impact of changes to ‘upstream’ determinants, which may not be amenable to controlled experiments. However, while natural experiments provide opportunities to generate evidence, they often present certain conceptual and methodological obstacles. Population health interventions that alter the physical or social environment are usually administered broadly across populations and communities. The breadth of these interventions means that variation in exposure, uptake and impact may be complex. Yet many evaluations of natural experiments focus narrowly on identifying suitable ‘exposed’ and ‘unexposed’ populations for comparison. In this paper, we discuss conceptual and analytical issues relating to defining and measuring exposure to interventions in this context, including how recent advances in technology may enable researchers to better understand the nature of population exposure to changes in the built environment. We argue that when it is unclear whether populations are exposed to an intervention, it may be advantageous to supplement traditional impact assessments with observational approaches that investigate differing levels of exposure. We suggest that an improved understanding of changes in exposure will assist the investigation of the impact of complex natural experiments in population health.
- Outcome research evaluation
- Environmental epidemiology
- Public health
- Research methods
This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/
In recent years, researchers have been encouraged to use natural experiments to generate better evidence to fill gaps in population health science.1,2 Natural experimental studies help researchers capitalise on ‘events’ that occur outside of their influence (eg, policy changes, economic shocks, natural disasters) and that change the ‘mass determinants’ of health in ways that may be impossible or unethical for researchers to manipulate deliberately.3,4 When events occur or are administered by chance, this may allow researchers to emulate the internal validity of randomised trials.5 Although perfect natural randomisation is rare, natural experiments can still be useful for creating comparison groups that are fairly well balanced.1 Where biases exist, a range of methodological and statistical tools have been developed to reduce bias and improve the validity of inferences. Overall, there is renewed optimism that the use of natural experiments can help to unlock answers to challenging questions in population health science.2,6
In 2011, the UK Medical Research Council (MRC) published guidelines for producers and users of evidence summarising a broad range of analytical techniques (eg, difference in differences, regression discontinuity, propensity score analysis) that can be employed to evaluate the impact of natural experiments.1,7 These approaches largely mirror the rationale, design elements and attention to validity threats found in randomised trials, catering to certain types of research questions (eg, ‘what works’) and generating certain types of empirical answers (eg, estimates of impact). Such approaches and research questions can be useful where causal chains are short and impacts are large,1,6 but may be less useful where complex causal pathways exist, as is often the case in population health interventions. In particular, where complexity exists, it may not be easy to conceive of clearly distinguishable ‘exposed’ and ‘unexposed’ comparison groups.8 Large-scale changes may require further examination: What does ‘exposure’ mean or consist of? How does it change in response to naturally occurring shocks (eg, economic recessions, policy changes)? How do changes affect behaviour? And what degree of change is required to bring about health benefits?9–12 As Diez Roux suggests, where complexity is an issue, ‘simplification can be obfuscating rather than illuminating’.13,14 Additional questions are required to illuminate what, how and why changes to social or environmental factors influence health.
This article focuses on one particular part of this puzzle: how can exposure to change be characterised in situations where interventions and exposures are difficult to define, or where human interaction with changing environments is multifaceted? Exposure measurement is currently an active area of innovation and discussion in observational epidemiological studies of place effects on health.10,11,15–17 Yet, although equally pertinent, discussions about how changes in exposure are conceived and measured are lacking in the evaluation literature. This paper aims to discuss different approaches for characterising exposure in the evaluation of natural experiments in situations in which the traditional use of binary treatment conditions (eg, intervention and control) may not suffice. We situate our analysis in the context of natural experiments that affect the built environment—a topical and challenging area of population health research—although we suggest that these issues are equally relevant to other areas of population health science. We hope to prompt a discussion about how exposure can be conceptualised, measured and incorporated within an evaluative framework for assessing the health impacts of natural experiments.
Conceptualising and measuring exposure in natural experimental studies
Identifying who or what is exposed to any change presents challenges for creating reliable comparison conditions.2,18,19 Such challenges are not a new consideration for those seeking to draw causal inferences about interventions from observational studies.20 But, in order to use more advanced statistical techniques, one needs an appreciation of what the intervention is, what the exposure categories (treatment conditions) are and the extent of non-compliance.5 Being specific about how populations are exposed to interventions may be difficult where the intervention, or exposure to it, is not rigidly defined. Interventions that change the built environment present further challenges: (1) defining the causal pathways that the intervention should trigger; (2) measuring exposure to an intervention; (3) understanding the variation in exposure intensity and (4) understanding how all of this results in population health impacts.
Defining the intervention and its causal pathway(s)
To measure exposure to an intervention more accurately, it is necessary to conceptualise what changes have been made to the environment and what causal processes these changes trigger that could affect health. With natural experiments, the ‘event’ itself (eg, a financial crash or an earthquake) may not be of central interest. Of prime interest is how the event changes key mediating factors (eg, unemployment, financial insecurity, stress, substance misuse) that affect important health outcomes—the event's ‘function’.21 It may therefore be less important to generalise impact from the trigger event than to improve understanding of the function(s) of each natural experiment (eg, homelessness, reduced access to healthcare, decreased safety of environments). Hypothesising how an event functions can help to construct what Ling calls a ‘contribution story’,22 whereby we identify processes and mechanisms through which changes to environmental determinants of health might occur, helping to identify the variables that may result in different exposure for different groups. Conceptualising an intervention and its ‘story’ in this way illuminates a theory of change (how change comes about) and theory of action (how the intervention activates the theory of change), collectively known as programme theory.23
Measuring exposure to the intervention
One of the benefits of developing a programme theory is easier identification of groups that differ in their exposure status and between which valid comparisons can be made. However, in the context of natural experiments, it may prove difficult to find appropriate and reliable data to measure such exposures at the most appropriate unit of analysis. In evaluations of changes to the built environment, the measurement of exposure may depend on different factors, such as the type and nature of the intervention, the outcome of interest and the induction or latency periods between exposure and outcome.24
For example, a deregulation of trade restrictions on fast-food outlets in a city provides the opportunity to test the relationship between availability of unhealthy food and dietary outcomes. A conventional approach might be to define exposure by geographical areas, comparing an ‘exposed’ area where policies were implemented with a ‘comparison’ area resembling the exposed area on baseline characteristics and other key potential confounders. Researchers might then use a quasi-experimental design (eg, difference in differences) to compare postimplementation dietary behaviours between those who live in the two areas. This approach depends on the assumption that, on average, people in the comparison area are not exposed to this change in the fast-food environment. This might not always be the case. Differing lifestyle patterns might mean that people routinely commute to, and spend time in, areas exposed to the ‘experimental’ environmental change. In such cases, using static exposure measures based purely on residential location may violate this assumption, leading to contamination. Instead, for some populations (eg, the working and mobile), more dynamic measures of exposure may be required to take into consideration routine ‘activity spaces’ and exposure to different environments.9,15,25 Understanding the fluidity of where and when an intervention begins and ends—spatially as well as temporally—is critical for understanding when dynamic exposures are required.
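The area-based comparison just described reduces to a simple calculation. The following is a minimal difference-in-differences sketch for the hypothetical fast-food deregulation; all figures, and the `diff_in_diff` helper, are invented for illustration:

```python
# Minimal difference-in-differences sketch for the hypothetical fast-food
# deregulation example. All figures are illustrative, not from any study.

def diff_in_diff(pre_exposed, post_exposed, pre_comparison, post_comparison):
    """DiD estimate: the change in the exposed area minus the change in
    the comparison area, which nets out shared secular trends."""
    return (post_exposed - pre_exposed) - (post_comparison - pre_comparison)

# Illustrative mean weekly fast-food meals per person in each area.
effect = diff_in_diff(pre_exposed=2.0, post_exposed=2.9,
                      pre_comparison=2.1, post_comparison=2.4)
print(round(effect, 2))  # 0.6 extra meals attributed to the policy
```

The estimate is only as good as its assumptions: it requires parallel trends between areas and an unexposed comparison group, so the contamination described above (comparison-area residents spending time in the exposed area) would bias it towards the null.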
Differing intensities of exposure
When considering changes to the built environment, there is rarely any clarity about which groups are and are not exposed to an intervention. For example, in some situations, it may be difficult to identify an unexposed population: where the exposure of distant populations cannot be ruled out, or in cases where variation in implementation (and thus exposure) exists. Where this is the case, it may be advantageous to use a ‘graded’ measure of exposure to capture the intensity of the influence of any environmental change. As has been suggested for randomised controlled trials, further process-level research is often required to understand issues of dose, uptake and maintenance in intervention research.26 This is especially important for non-randomised study designs, where processes occurring within the ‘black box’ may hold important answers for explaining differential uptake and effectiveness. Generating more intricate measures of exposure, based on a clear theoretical model, can help to test hypotheses about complex interactions between environments and individuals. This, in turn, will lead to a better understanding of how environmental changes work and for whom.27
Defining exposure in practice
This section offers non-exhaustive examples of how exposure may be characterised in natural experiments where anticipated changes to the built environment are likely to influence health. The following discussion of approaches is organised in ascending order of technical sophistication and data demands. However, less sophisticated approaches may require stronger assumptions that may or may not be justifiable.
Static or hypothetical exposure measurement
In natural experiments, geographic information systems (GIS) are commonly used to characterise exposure and minimise important cultural differences by attempting to identify or create local comparison units: groups comparable on observed and unobserved covariates at baseline and from the same locale.28 For practical purposes, researchers often use pre-existing administrative spatial boundaries (eg, zip codes, census tracts) that correspond with the availability of other routine datasets. Where an intervention is believed to affect a specific area, researchers look for reliable comparable units that are unexposed.29 These units may be matched using conventional matching techniques, or researchers may employ methods to create synthetic units.1,30 Where an intervention is believed to affect a specific place (ie, address or spatial point)—such as a community park or school—it may be possible to use a concentric boundary or street network buffer zone to specify an areal circumference around the environment of interest (figure 1A). This can be used to collect data on events occurring within the vicinity of the environment (eg, on crime or injuries), or to sample individuals who live near the location of interest.
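The matching of comparison units mentioned above can be illustrated with a toy example. In this sketch, the areas, covariates and values are invented, and real evaluations would match on many more characteristics (or construct synthetic control units); it simply selects the candidate area closest to the exposed area on pre-scaled baseline covariates:

```python
# Toy nearest-neighbour matching of candidate comparison areas to an
# exposed area on baseline covariates. All data are invented for
# illustration; covariates are assumed pre-scaled to comparable ranges.

def match_comparison(exposed, candidates):
    """Pick the candidate area minimising the squared distance to the
    exposed area across shared baseline covariates."""
    def distance(area):
        return sum((exposed[k] - area[k]) ** 2 for k in exposed if k != "name")
    return min(candidates, key=distance)

exposed_area = {"name": "A", "deprivation": 0.6, "density": 0.8, "baseline_obesity": 0.3}
candidates = [
    {"name": "B", "deprivation": 0.2, "density": 0.4, "baseline_obesity": 0.2},
    {"name": "C", "deprivation": 0.55, "density": 0.75, "baseline_obesity": 0.32},
]
print(match_comparison(exposed_area, candidates)["name"])  # C
```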
The use of area-based exposure definitions can be attractive because the analytical requirements are relatively straightforward. However, their use requires several assumptions about the relationship between exposure and the processes that might influence health (see online supplementary appendix A for an expansion on each of these points):
- There are reasonable conceptual grounds to believe that proximity to any change in the built environment is central to defining exposure.
- The structural change has a ‘zone of influence’ that can be defined and justified at an appropriate spatial scale (with reasonable face validity).
- Exposure to the intervention can be treated as being dichotomous (eg, ‘exposed’ in target areas vs ‘unexposed’ outside).
This approach was used in an Australian study evaluating the impact of a walking and cycling trail on physical activity.31 Here, multiple buffer zones were created around access nodes to the new trail to test whether awareness and use of the new infrastructure were greater in areas close by. Area-based units have been used widely across fields, including crime prevention,32 substance misuse,33 physical activity34 and nutrition.35
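As a rough illustration of the buffer-zone approach, the sketch below assigns homes to the innermost of several concentric buffers around hypothetical trail access points. It uses straight-line distance in projected (metric) coordinates; real studies would typically use GIS software, and might prefer street-network buffers, which this simplification does not capture:

```python
import math

def classify_exposure(home, trail_access_points, buffers=(400.0, 800.0, 1600.0)):
    """Assign a home location to the innermost concentric buffer (radii in
    metres) around the nearest access point; None means outside all
    buffers, ie, 'unexposed' under the area-based assumption. Coordinates
    are assumed to be in a projected metric system."""
    nearest = min(math.dist(home, p) for p in trail_access_points)
    for radius in buffers:
        if nearest <= radius:
            return radius
    return None

# Hypothetical access points along a new trail, coordinates in metres.
access = [(0.0, 0.0), (1000.0, 0.0)]
print(classify_exposure((300.0, 200.0), access))    # 400.0: innermost buffer
print(classify_exposure((2500.0, 2500.0), access))  # None: outside the zone of influence
```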
Individually computed distances
For some research questions, individual measurements can be used to create more specific population exposures. For example, it may not be appropriate to define exposure by assigning all individuals to a single geographic attribute, such as home location—populations may be members of multiple geographic units, whether ‘exposed’ or ‘unexposed’. In addition, it may not be possible to identify an ‘unexposed’ comparison area. Furthermore, exposure may vary considerably within an area, either between individuals or between groups. Where these challenges exist but proximity remains a prominent feature of a programme theory, it may be possible to develop more specific distance-based measures to allow exposure to vary between individuals who occupy the same geographic areas (figure 2). These could be used to generate ordinal or continuous measures of exposure, or could be spatially modelled to create generalised exposure surfaces to help visualise heterogeneity of exposure across space.
In another natural experimental study, of new walking and cycling infrastructure in the UK, the network distance from each participant's home to the nearest access point was taken as a primary measure of exposure.36 These distances were shown to be linearly associated with awareness and use of the intervention and, subsequently, with changes in overall walking, cycling and physical activity. Conceptualising exposure as an ordinal variable had considerable face and predictive validity for this particular intervention.37 This approach involves more complex analytical requirements and also makes a number of assumptions that may not be justifiable given the intervention:
- The proximity of the home location to the intervention site (or area) is central to classifying exposure.
- A distance-decay effect is predictive of ‘absorbed exposure’ or uptake.
- Computed distance-based exposure measures reflect actual or perceived distances.
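A distance-decay effect of the kind assumed above can also be expressed as a continuous weight rather than an ordinal category. The sketch below is purely illustrative: the exponential form, the 1 km half-distance and the use of straight-line rather than network distance are assumptions, not parameters from the study cited:

```python
import math

def decay_weight(home, access_points, half_distance_m=1000.0):
    """Continuous exposure weight in (0, 1] that halves every
    `half_distance_m` metres from the nearest access point. The functional
    form and half-distance are illustrative assumptions; coordinates are
    assumed to be in a projected metric system."""
    d = min(math.dist(home, p) for p in access_points)
    return 0.5 ** (d / half_distance_m)

# One hypothetical access point at the origin, coordinates in metres.
access = [(0.0, 0.0)]
print(decay_weight((0.0, 0.0), access))     # 1.0 at the access point
print(decay_weight((1000.0, 0.0), access))  # 0.5 after one half-distance
print(decay_weight((2000.0, 0.0), access))  # 0.25 after two
```

A continuous weight like this could feed into regression models directly, avoiding the information loss of dichotomising or binning distance.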
Individually calibrated exposure
One way of dispensing with the inherent assumptions of the approaches outlined above is by using individually calibrated measures of exposure. With additional information about participants' pre-existing routine behaviours, such as their home and work locations and their modes of transport, it may be possible to generate ‘activity’ or ‘exposure’ spaces that determine whether exposure to a particular environmental change is likely to occur.10,15 For example, some individuals may live close to a new urban green space, but recorded ‘activity nodes’ (ie, home and work locations and commute route) indicate that the change to the urban infrastructure is located outside their regular ‘activity space’, which therefore makes exposure less likely. Conversely, other individuals may reside far from the site of an environmental change, but their ‘activity space’ brings them near to it and increases the likelihood of exposure (figure 3).
Activity space modelling was applied in a third natural experimental study of new transport infrastructure, again in the UK.19 Researchers used each study participant's residential and work address to build a model of their quickest route to work. Journey times were calculated for various modes of travel (ie, car, public transport, cycling and walking) before and after the introduction of the new infrastructure, and changes in modelled travel times attributable to the intervention were used to create graded measures of exposure.38 This approach requires much greater technical sophistication. Key assumptions include:
- Exposure to an intervention is not solely dependent on residential location.
- Relevant exposure is calculated using information about exposure at, and perhaps en route between, certain key conceptually justifiable ‘anchor points’ (eg, home or work).
- The intervention's ‘zone of influence’ can be defined and justified at an appropriate spatial scale (with reasonable margin for error).
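The graded, travel-time-based exposure measure described above might be sketched as follows. The journey times, modes and grading thresholds are hypothetical, and the routing model that would actually produce the modelled times is omitted:

```python
# Sketch of a graded exposure measure from modelled commute times.
# Times (minutes) per travel mode before and after new infrastructure;
# all values and the grading thresholds are hypothetical.

def graded_exposure(before_min, after_min):
    """Grade a commuter by how much the new infrastructure reduces their
    quickest modelled journey across all available modes."""
    saving = min(before_min.values()) - min(after_min.values())
    if saving >= 10:
        return "high"
    if saving >= 3:
        return "medium"
    return "low"

commuter = {
    "before": {"car": 25, "bus": 40, "cycle": 35, "walk": 70},
    "after":  {"car": 25, "bus": 40, "cycle": 21, "walk": 70},  # new cycle route
}
print(graded_exposure(commuter["before"], commuter["after"]))
# "medium": the quickest modelled journey falls from 25 to 21 min
```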
Dynamic or observed exposure measurement
While individually calibrated measures offer an important insight into exposure that occurs beyond the residential neighbourhood, projections such as these are an imperfect approximation of complex interactions between populations and environmental changes triggered by a natural experiment. Methods are available for capturing exposure based on routine mobility that may provide a more accurate approximation of these important interactions. These methods have typically been used in aetiological research using travel diaries,39 ‘space-time’ budgets,12 GIS-assisted interviews10,24 and global positioning systems (GPS).40 Many of these methods require research participants to report their activities retrospectively, thus increasing the possibility of recall bias. However, with the dawn of ‘big data’ and the growth in ownership of handheld GPS devices and mobile location-based services, it may be possible to use real-time spatial tracking applications to define and monitor spatial exposure to environmental changes. Such studies could also incorporate information on real-time perceptions or measurements of health and well-being to better understand the important interactions between individuals and the places in which they spend time.10,41 However, it is possible that the use of new technologies to create higher order measures of exposure may not provide immediate clarity. Considerable work will be required to unravel the direction and potential circularity of relationships between environmental changes, exposures and related health behaviours.42
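Where GPS traces are available, one crude observed-exposure measure is simply the share of a participant's recorded fixes falling near the site of the environmental change. The sketch below assumes projected coordinates in metres and an illustrative 200 m radius; real analyses would need to handle positional error, sampling gaps and time weighting:

```python
import math

def observed_exposure(gps_track, site, radius_m=200.0):
    """Share of a participant's GPS fixes falling within `radius_m` of the
    intervention site: a crude stand-in for the dynamic, observed exposure
    measures discussed above. Coordinates are assumed projected (metres);
    the 200 m radius is an illustrative choice."""
    inside = sum(1 for fix in gps_track if math.dist(fix, site) <= radius_m)
    return inside / len(gps_track)

# Hypothetical track of minute-by-minute fixes around a new park at (0, 0).
track = [(500.0, 0.0), (150.0, 50.0), (90.0, 10.0), (400.0, 300.0)]
print(observed_exposure(track, (0.0, 0.0)))  # 0.5: two of four fixes within 200 m
```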
Discussion and conclusion
Natural experiments are becoming an increasingly popular tool to help population health researchers generate better evidence where planned experiments are not possible.1,2 One of the key strengths of natural experimental studies is that they use exogenous events to mimic random assignment, helping to create comparison groups that are balanced on the basis of chance, ‘as-if’ randomised.43 This has been useful for generating unbiased estimates of causal effects in some areas of population health.1 However, there are methodological challenges to using natural experiments in population health. These may limit a study's ability to generate valid estimates of intervention effects, generalise from these estimates or provide a more nuanced understanding of how certain exposures influence health. In studies that examine changes to an environment that may deter or facilitate healthy behaviours, it may not be obvious how an intervention changes the environment, who is exposed to these changes and where any boundary of exposure is located. Such uncertainties may make it difficult to employ more advanced statistical techniques, such as those described in the recent MRC guidance on natural experiments, if the exposure has not been conceptualised in a meaningful way.1,7 Questions of great interest to population health scientists may remain unanswered if natural experimental studies are designed with strict adherence to the experimental framework. New methods can provide useful estimates of the magnitude of any population health effect, but explaining why this effect occurred and how it can be replicated in other contexts requires a more systematic approach to understanding the processes and mechanisms interacting along the causal pathway.
This is not to say that we discourage the application of the experimental framework or question the utility of natural experiments. On the contrary, we are optimistic about the evolution of opportunistic methods and believe they have a central role in producing better evidence in population health. In this paper, we recommend a more thorough approach to the definition of exposure in the evaluation of large-scale population health interventions, particularly those involving changes to the built environment. All too often, research characterises exposure on the basis of either membership of a geographic area in which some environmental variable has changed or proximity of residential location to an environment of interest, such as a new amenity. As a growing literature in observational epidemiology has shown, exposures to environments that enable or constrain health may be multifaceted, and mobile individuals are exposed to and absorb environmental influences from many places and at different times.10,11 As the tools to measure diverse routine environmental exposures advance, we should not ignore the potential implications (and opportunities) these data present for the evaluation of interventions.
What is already known on this subject?
- Natural experiments can be used to help understand how changes to aspects of the built environment affect health.
- Selection of inappropriate counterfactuals may hamper the evaluation of public health interventions.
- Greater understanding of how environmental changes affect exposures that result in changes in health may strengthen causal inference.
What this study adds
- We describe the conceptual and methodological challenges of defining exposure in natural experimental studies.
- We outline a range of potential approaches with differing assumptions, technical requirements and implications for causal inference.
- More careful consideration of exposure assessment in this way may strengthen public health intervention research.
This work was undertaken by the Centre for Diet and Activity Research (CEDAR), a UKCRC Public Health Research Centre of Excellence. Funding from the British Heart Foundation, Economic and Social Research Council, Medical Research Council, National Institute for Health Research (NIHR) and Wellcome Trust, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged.
Funding National Institute for Health Research (grant no. PDF-2012-05-157 and PDF-2010-03-15); UK Clinical Research Collaboration (grant no. RES-590-28-0002). DO is supported by the Medical Research Council (Unit Programme number: MC_UU_12015/6) and AG and JP are supported by NIHR fellowships.
Contributors The idea for this paper originated in discussion between all authors. DKH, DO and JP developed the framework presented in the paper that was refined through discussions with all authors. DKH wrote the article with significant contributions from all other authors.
Disclaimer The views and opinions expressed here are those of the authors and do not necessarily reflect those of NIHR, the NHS or the Department of Health.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.