Methods for systematic reviews are well developed for trials, but not for non-experimental or qualitative research. This paper describes the methods developed for reviewing research on people’s perspectives and experiences (“views” studies) alongside trials within a series of reviews on young people’s mental health, physical activity, and healthy eating. Reports of views studies were difficult to locate; could not easily be classified as “qualitative” or “quantitative”; and often failed to meet seven basic methodological reporting standards used in a newly developed quality assessment tool. Synthesising views studies required the adaptation of qualitative analysis techniques. The benefits of bringing together views studies in a systematic way included gaining a greater breadth of perspectives and a deeper understanding of public health issues from the point of view of those targeted by interventions. A systematic approach also aided reflection on study methods that may distort, misrepresent, or fail to pick up people’s views. This methodology is likely to create greater opportunities for people’s own perspectives and experiences to inform policies to promote their health.
- systematic reviews
- qualitative research
- evidence informed policy and practice
- young people
The context for this paper is using research to inform policy for promoting health, but the issues it raises—about systematic methods for bringing together studies that attempt to understand policy issues from the perspectives of the people they affect (“views” studies)—are important for health and social policy more generally.
Much energy goes into developing strategies for improving public services or tackling social issues. Not all such strategies take into account the experiences and views of those most closely concerned.1,2 There is an emerging consensus that good quality research about people’s views should be used to inform policy alongside studies that describe the problem, investigate factors that are associated with it, and evaluate the effects of interventions to tackle it.3,4 Attention to people’s own perspectives is also advocated in disciplines such as sociology and anthropology.5 Research about people’s views is often, but not always, “qualitative” in nature. As Oakley notes, it is considered by some to represent an alternative paradigm incompatible with the “quantitative” paradigm.6
Literature reviews accumulate learning and avoid the pitfalls of relying on single studies. Systematic reviews apply explicit methods to this task, such as comprehensive searching and the quality assessment of studies. There are therefore good reasons for applying systematic review methods to views studies. Systematic review methodology is well developed for trials,7,8 but the debate about systematic approaches to reviewing non-experimental research is in its early stages,9,10 with a small but growing body of methodological work on the synthesis of qualitative research.11–16 This paper aims to contribute to this debate by reflecting on methods used in a recent series of systematic reviews that included views studies alongside trials. The substantive findings have been reported elsewhere,17–19 as has a description of how the findings of views studies were integrated with the findings of trials.20
DESCRIPTION OF REVIEW SERIES: APPROACH, PROCESSES, AND METHODS
Between 1999 and 2001 we carried out three systematic reviews on the barriers to, and facilitators of, mental health, physical activity, and healthy eating among young people, funded by the Department of Health (England).
Figure 1 illustrates our overall approach. We hypothesised that our review question “What is known about the barriers to, and facilitators of, health and health behaviour among young people?” could be answered by two types of study: (a) “intervention studies” to identify effective, ineffective, and harmful interventions; and (b) “non-intervention” studies that aimed to describe factors associated with mental health, physical activity, and healthy eating. A mapping exercise revealed a large number of primary studies (n = 510). Meetings with project funders and other stakeholders identified subsets of studies for in depth review. For the “non-intervention” studies, a decision was made to focus on UK views studies, published in or after 1990.
We followed the standard stages of a systematic review for the in depth review of views studies, supported by specially developed tools. In the rest of this paper we reflect on how well this approach worked and the challenges that remain.
Reflections on the approach
(1) What is a “views” study?
(a) Identifying “views” studies
We aimed to identify those studies that placed people’s own voices at the centre of their analysis. A total of 35 studies across the three reviews met this criterion. Some excluded studies did examine views, but used attitude scales and a statistical analysis to trace causal pathways to behaviours. These studies are amenable to meta-analysis.
Reports of views studies were not easy to find or access. On electronic databases we did not search specifically for views studies but combined topic keywords (for example, healthy eating) with population keywords (for example, young people). This meant that we had to sift through large numbers of citations (fig 1). A substantial number were grey literature reports (n = 12), identified by contacts with relevant organisations and authors.
(b) Methods used in “views” studies
Our 35 studies varied in the methods they used (table 1). It was not easy to classify studies according to whether they were “qualitative” or “quantitative”. Some studies collected young people’s views in their own words and then used frequencies to quantify them. Other studies that included both fixed response and open-ended items in questionnaires did not always report results from the latter.
(2) Can we trust the findings of “views” studies?
(a) Quality criteria
For qualitative research there is fierce debate about what counts as good quality, or whether quality should be a concern at all.21 This contrasts with the situation for trials, where there is agreement that randomised comparison groups, blinded outcome assessment, and concealment of the allocation process are crucial,22 and where validated assessment instruments have been developed.23
Quality assessment revealed that views studies fell well short of basic methodological standards (table 2). Only four of the 35 studies met all seven criteria (data not shown in table).
The quality assessment tool needs to be further developed. Two reviewers did not always agree on which criteria a study had met. Although reviewers always reached consensus after discussion, more detailed guidance on how to judge whether aspects of a study are “clear”, “explicit”, or “sufficient” is needed.
(b) Using assessments of methodological quality
In systematic reviews of trials, quality assessment is used as a basis for excluding or weighting studies. Quality criteria for trials assess the extent to which their findings can be relied upon to answer questions about the effects of interventions. Choice of quality criteria is therefore driven by the review question.
We did not exclude or weight views studies because there is no consensus about the “right” way to assess the quality of views studies. In our reviews, we were piloting one of many possible sets of criteria. On reflection, we found that their main strength was providing an explicit framework for highlighting the strengths and weaknesses of studies. However, they focused mainly on generic issues of reporting quality, which did not help us to assess studies in relation to our review question—understanding what young people see as the barriers to, and facilitators of, their health behaviour. In future reviews, additional quality criteria are required to assess whether study findings are rooted in young people’s own perspectives.
Our experiences with the 35 views studies highlighted three issues for additional criteria to cover:
Pilot work before finalising data collection tools to ensure that questions and/or response categories are meaningful to young people. Pilot work appeared to be the exception rather than the rule among the views studies (n = 14), and this raises questions about whether findings reflected researchers’ a priori assumptions rather than young people’s own views.
Methods of data analysis. Detail on such methods was rarely given in views studies (see table 2) and it was difficult to tell whether themes were grounded in young people’s views. In some studies pre-defined coding strategies, often derived from interview or focus group schedules, were used to analyse data.
Ensuring the full and active participation of young people in the research. Few studies reported attempts to ensure confidentiality (n = 12) or consent procedures (n = 7). When these features were absent, findings may have only represented what young people were prepared to admit in a potentially uncomfortable research situation.
Attention to these shortcomings and their potential to distort, misrepresent, or simply fail to pick up the views of young people is a key challenge for future qualitative and quantitative views studies. It would be unwise for future systematic reviews of these types of studies to include poor quality studies; this would represent a “double standard” in comparison with systematic reviews of trials.
(3) How can the findings of “views” studies be synthesised?
(a) Rendering “views” studies comparable
With the advent of the CONSORT and STARD statements,28,29 trial and diagnostic study reports are more likely to be presented in a standard way. Reports of views studies, by contrast, varied in writing styles and publication formats. Our data extraction tool was essential in helping to “deconstruct” each study. We were then able to “reconstruct” the studies in a standard format, using two types of “evidence” tables (illustrated in tables 3 and 4) and structured summaries, to facilitate comparison between them. Structured summaries were between one and two pages in length, elaborating on, and putting into context, the information presented in the evidence tables.
Having two researchers reconstruct each study in a standard format meant that at least two members of the review team had in depth knowledge of each study. This was labour intensive but crucial to the success of the synthesis.
The synthesis process was non-linear and involved reviewers going back and forth between the original papers, their data extractions, and the “evidence” tables. We found it useful to draw on the metaphors normally associated with qualitative analysis to describe the process. For example, by rendering the views studies comparable we had “immersed ourselves in the data” as we constructed the synthesis.
(b) Using qualitative analysis techniques for synthesis
In statistical meta-analysis, the way effect sizes are synthesised can be captured by the term “pooling”. Pooling effect sizes is akin to creating one large study to answer a review question. For pooling to be appropriate, studies must be as similar as possible in the question they try to answer and the methods they use to answer it. Differences in the questions and methods used in the views studies meant that their findings were not suitable for pooling in this sense. In our reviews, “aggregating” findings across studies was a more useful metaphor for describing synthesis, whereby findings are broken down, interrogated, and then combined into a whole via a listing of themes.31 The methods for synthesis developed iteratively across the three reviews in the series. We worked both by using a priori codes to group studies and by allowing themes to emerge. There were three main steps:
Step 1: Classifying studies
We classified studies according to which aspects of young people’s views they assessed, asking how the findings of the “views” studies could inform intervention development. Four main issues emerged from the studies included in the mental health review, and these areas were subsequently specified a priori in the physical activity and healthy eating reviews: (a) what the terms mental health, physical activity, and healthy eating meant to young people; (b) what stopped young people from being physically active or eating healthily, or what made young people feel bad; (c) what helped young people to be physically active, eat healthily, or feel good; and (d) young people’s own ideas about how to promote their mental health, physical activity, or healthy eating.
Step 2: Comparing and contrasting findings
We compared and contrasted findings across studies to identify similarities and differences. For example, in the mental health review, 10 of the included studies examined sources of stress or worry for young people. The most common ones were: school work; physical appearance; choosing and finding a job; lack of material resources; feeling powerless; relationships with friends and wider peer groups; and family discord. When we identified differences in findings, we examined whether these could be explained by the differences in methods or sample characteristics. For example, in the mental health review, differences in preferred coping strategies were explored by age and sex. Young women reported talking to friends or trusted adults as their usual coping strategies, and older young people reported using drugs, alcohol, or physical aggression. In this way we were able to describe the range of views held by young people and highlight those which may be more important for particular groups of young people.
KEY POINTS

- We have developed, and propose for wider use, methods for including in systematic reviews non-experimental and “qualitative” research examining people’s perspectives and experiences of health and social issues (“views” studies).
- Bringing together the findings of quantitative and qualitative views studies in a systematic way can be achieved by supplementing conventional systematic review principles and methods (for example, reducing bias, exhaustive searching) with more novel ones (for example, increasing breadth and depth of understanding, and qualitative synthesis techniques).
- Further research is needed to examine the value of this approach by comparing it with non-systematic reviews of similar scope. An additional challenge is to examine the impact on a review’s conclusions of including and excluding studies of different methodological quality.
Step 3: Thematic analysis
To answer our over-arching review question about barriers and facilitators we aimed to integrate the findings of the views study synthesis with those from trials of interventions in the three topic areas. Our methods for integrating these are reported elsewhere.20 However, our views studies were the starting point for this integration and to prepare for this we used thematic analysis to identify and group barriers and facilitators. At first, we drew on contemporary models of health promotion that suggested interventions should target barriers and facilitators at the individual, community, and society level.32,33 We struggled to understand the findings of our views studies using these categories. We found that young people talked about what helped and what hindered their health and behaviour within four inter-related “realms” covering: the school; family and friends; the self; and practical and material resources. These emergent categories, which re-aligned the categories suggested by theory to more closely match the issues raised by young people, were used to classify barriers and facilitators (table 5).
(4) Is it worth it?
In terms of breadth, a large number of young people from diverse groups were able to contribute their views. A total of 37 335 young people took part across the studies (based on the 33 studies that reported a sample size). We paid particular attention to the characteristics of study samples on key markers of inequalities. Attention to issues of gender across the studies was fairly comprehensive: five studies focused on the views of young men or young women alone, and a further 17 studies looked for differences in views between these two groups (table 6). However, our reviews were limited in their ability to examine how young people’s views were related to their social class and ethnic background because of the scant information provided; 24 of the 35 studies did not report this information. Our reviews were able to highlight these gaps and recommend that primary research be commissioned to address them. The analysis offered a clear message to researchers to describe the social characteristics of their samples more carefully: of the 37 335 young people included in the studies, the social class of 36 437 (98%) was not reported, and the corresponding figures for ethnicity were 33 813 (91%).
Our included studies varied in terms of the depth of their descriptions and analysis. For example, some studies highlighted the range of things that young people identified as barriers and facilitators, others identified the relative importance of different barriers and facilitators for different groups of young people, and some examined why or how different factors acted as barriers and facilitators. For example, in the physical activity review, consistent differences between young men and young women were found across the 16 included “views” studies, with young women reporting lower participation rates and less favourable attitudes. However, insight into why this might be the case was only addressed in four studies.
POLICY IMPLICATIONS

- The contribution that non-experimental and qualitative research can make to evidence informed policy and practice has been limited in the past because of uncertainty about how to include it in systematic reviews.
- Advances made in systematic review methods for these study types in a recent review series have created a unique opportunity for young people’s own perspectives and experiences to inform the development, implementation, and evaluation of interventions to promote their health.
Although it seems that issues of breadth are likely to be tackled by quantitative studies and issues of depth by qualitative studies, this was not always the case. Greater breadth was often provided by both smaller scale qualitative studies focusing on particular groups of young people and by large scale quantitative studies focusing on a range of different groups. Greater depth was not always provided by qualitative studies. Some studies of this type described the range of views held, but did not analyse these views further.
CONCLUSIONS AND FUTURE DIRECTIONS
Through a series of reviews in the area of health promotion for young people, we have developed methods for including in systematic reviews non-experimental and “qualitative” research examining people’s perspectives and experiences. We combined conventional systematic review principles and methods with more novel ones developed in the course of the review series. Using the techniques and terminology usually associated with qualitative analysis of primary research data helped us to be systematic and explicit. In addition to the goal of reducing bias, considerations of depth and breadth seem to be at the heart of producing good quality and useful syntheses of views studies. The specific methods for synthesis we developed in our review series have much in common with those recently developed and applied in the substantive areas of nursing and illness experiences.13,35–37 This work has emphasised the theory building potential of synthesis. Our work also makes use of this potential by using young people’s views and experiences to generate theories about which interventions might work to promote their health. The work reported in this paper extends this new genre of syntheses of qualitative research into public health, setting it explicitly within a systematic review framework.
It would be fruitful to test out the methods developed here in other reviews. One question is whether adopting a systematic approach produces different findings from non-systematic reviews. Work is also needed on assessing the quality of views studies. Our experience suggests that we need to move beyond generic criteria about the quality of reporting of methods to assess the extent to which study findings are rooted in people’s own perspectives and experiences. We have suggested three possible issues to consider in the development of such criteria (pilot work for data collection tools; careful use of pre-defined coding schemes for data analysis; and active participation of people in research). Such criteria would need to be empirically tested, alongside the criteria suggested by other groups,5,38 by asking what happens to the conclusions of reviews if we exclude studies that “fail” different sets of quality criteria. The usefulness of any systematic review depends on the quality of studies available. Our review series exposed some serious shortcomings of young people’s views studies and we hope that more reviews of this type will improve the way that these types of studies are designed, implemented, and reported.
The work reported in this paper is part of a programme of work on advancing evidence based health promotion at the EPPI-Centre funded by the Department of Health (England). The views expressed are those of the authors and not necessarily those of the Department of Health.
Conflicts of interest: none declared.