RT Journal Article
SR Electronic
T1 Systematic reviews of health effects of social interventions: 2. Best available evidence: how low should you go?
JF Journal of Epidemiology and Community Health
JO J Epidemiol Community Health
FD BMJ Publishing Group Ltd
SP 886
OP 892
DO 10.1136/jech.2005.034199
VO 59
IS 10
A1 David Ogilvie
A1 Matt Egan
A1 Val Hamilton
A1 Mark Petticrew
YR 2005
UL http://jech.bmj.com/content/59/10/886.abstract
AB Study objective: There is little guidance on how to select the best available evidence of health effects of social interventions. The aim of this paper was to assess the implications of setting particular inclusion criteria for evidence synthesis. Design: Analysis of all relevant studies for one systematic review, followed by sensitivity analysis of the effects of selecting studies based on a two dimensional hierarchy of study design and study population. Setting: Case study of a systematic review of the effectiveness of interventions in promoting a population shift from using cars towards walking and cycling. Main results: The distribution of available evidence was skewed. Population level interventions were less likely than individual level interventions to have been studied using the most rigorous study designs; nearly all of the population level evidence would have been missed if only randomised controlled trials had been included. Examining the studies that were excluded did not change the overall conclusions about effectiveness, but did identify additional categories of intervention such as health walks and parking charges that merit further research, and provided evidence to challenge assumptions about the actual effects of progressive urban transport policies. Conclusions: Unthinking adherence to a hierarchy of study design as a means of selecting studies may reduce the value of evidence synthesis and reinforce an "inverse evidence law" whereby the least is known about the effects of interventions most likely to influence whole populations. Producing generalisable estimates of effect sizes is only one possible objective of evidence synthesis. Mapping the available evidence and uncertainty about effects may also be important.