Article Text


How are policy makers using evidence? Models of research utilisation and local NHS policy making
  1. Heather Elliott,
  2. Jennie Popay
  1. National Primary Care Research and Development Centre at Salford, Public Health Research Resource Centre, The University of Salford, Humphrey Booth House, Hulme Place, The Crescent, Salford M5 4NY


STUDY OBJECTIVE This paper is based on a qualitative study that aimed to identify factors that facilitate or impede evidence-based policy making at a local level in the UK National Health Service (NHS). It considers how models of research utilisation drawn from the social sciences map onto empirical evidence from this study.

DESIGN A literature review and case studies of social research projects that were initiated by NHS health authority managers or GP fundholders in one region of the NHS. In depth interviews and document analysis were used.

SETTING One NHS region in England.

PARTICIPANTS Policy makers, GPs and researchers working on each of the social research projects selected as case studies.

MAIN RESULTS The direct influence of research evidence on decision making was tempered by factors such as financial constraints, shifting timescales and decision makers' own experiential knowledge. Research was more likely to impact on policy in indirect ways, including shaping policy debate and mediating dialogue between service providers and users.

CONCLUSIONS The study highlights the role of sustained dialogue between researchers and the users of research in improving the utilisation of research-based evidence in the policy process.

  • evidence-based policy making
  • research/policy interface
  • research utilisation


This paper reports how health policy makers understood the role of research and development in their work and how closely prominent models of the research/policy interface approximated the relation in practice. It is based on a qualitative study of local policy makers within the UK National Health Service (NHS).1 2

The study took place in the aftermath of the NHS reforms. Reforming health care was a common goal in many European countries and in the US in the early 1990s, driven largely by concerns to contain costs. These concerns had been prominent for a decade or more as growth in public expenditure had slowed in the wake of the oil crisis in the 1970s.3 Additionally, fears that a publicly funded health service, which was free at the point of delivery, might be swamped by excessive and inappropriate demands had been an ongoing feature of the NHS since its inception.4 5 Another common thread linking the drive towards reform across Europe and the US was the aim to make health care more responsive to users.3 The UK reforms were among the most radical and most quickly adopted. The relatively rapid implementation was facilitated by the centrally driven nature of the UK system. An internal market was created whereby the functions of purchasing and providing health care were separated.5 Local NHS trusts were created to provide health care services, while local health authorities focused on purchasing care and, to a lesser extent, pursuing public health objectives. The rationale for this move was that competition among health care providers would improve the quality and cost effectiveness of health care by making providers more responsive to the needs of health care users and purchasers.6

Evaluating the impact of the reforms was a difficult task: their nature was wide sweeping; they were outlined only in general terms when introduced, with the detail worked out "on the hoof" during implementation; and no official evaluation was commissioned. Furthermore, the reforms left a legacy of cultural change and shifts in the distribution of power, which was difficult to measure and assess. However, there is some evidence of gains in efficiency as a result of the reforms, as evaluated by the rather crude measure that increases in activity outstripped increases in expenditure. If anything, patient choice decreased, with providers continuing to regard patients as a guaranteed commodity. The split between providing and purchasing functions was considered successful. However, cooperation between those providing and purchasing care was judged to be a more effective basis for the relation than the competition the internal market was intended to foster.7 Indeed, this was in any case the basis on which the "internal market" had tended to operate, because of shortages of information about costs, prices and quality among purchasers and providers,8 provider monopolies,9 and the tendency for quasi-markets to evolve into long term contractual relations.10

The UK reforms coincided with renewed interest in evidence based health care (EBHC). Definitions of EBHC are contested and evolving. However, the basic tenets are that decisions about healthcare should be based on current best evidence, balanced with individual clinical expertise.4 11 EBHC was particularly influential in the UK where there were concerted moves to ground policymaking as well as practice on evidence. The 1991 R&D strategy was born of concerns that “strongly held beliefs rather than sound information still exert too much influence on health care.”12

EBHC was seen as having a part to play in delivering the cost containment and patient responsiveness agenda of the reforms. It was anticipated that research evidence distinguishing effective from ineffective interventions could form the basis of priority setting for health expenditure.3 4 Similarly, research evidence was also envisaged as being instrumental in making the service more responsive to users and in shifting influence from providers to purchasers and users of healthcare, by providing an alternative basis for health decision making to clinical expertise. Within the UK, health authorities responsible for purchasing care were increasingly required to allocate resources on the basis of evidence of effectiveness and of the needs of communities ("health needs assessments").

EBHC is the subject of ongoing debate. In particular, concerns that "rigour" in evaluating research is too closely identified with randomised controlled trials, marginalising research using other designs, have been raised4 13-15 and countered.16 Questions have also been raised about how to handle the value judgements that inevitably inform research and policy making processes,15 how to manage the process of transferring research into practice17 and how to integrate evidence from research, clinicians and health care users into decision making. These questions about rigour, values and the relevance of research to practice, which have come to the fore in the context of EBHC, are global and enduring. This paper analyses how they were dealt with by NHS purchasers and the variety of ways they made use of research to develop locally relevant health care.

Models of research utilisation

The question of what practical use social science might be has been of longstanding interest to both social scientists and policy makers. It is not our intention to review the literature on the research/policy interface—an area that has been thoroughly discussed elsewhere.18 19 Rather, the focus of this paper is on the "problem solving" and "interactive" models of research utilisation developed by Weiss,20 whose work has formed the backbone of much recent thinking on these issues,18 19 21 and on the less well known "dialogical model" of research utilisation developed by Giddens.22 Although the dialogical model was developed during the 1980s, it has been relatively neglected since and, unlike Weiss's models, has not been tested empirically.23

The problem solving model proposes that research is used to fill an identified knowledge gap in the policy process. A policy problem is identified, the solution sought through existing research, research in progress or new research, and the information then transferred from the research arena to the policy and practice arena. This orientation to the research/policy interface assumes a clearly defined place for research, at the heart of the policy making process. The relation between the researcher and the policy maker is one of customer and client. The problem solving model remains one of the most popular and enduring models within policy research.18 19 For example, it underpinned the NHS R&D strategy, as figure 1 illustrates.24

Figure 1

Sequence from problem and research solution to implementation and outcome. (From reference 24).

However, questions have been raised about the utility of the problem solving model within the scientific and technological research for which it was developed and, more particularly, at the social research/policy interface.18 19 21 The main criticisms are: that policy problems are often intractable, or not clearly enough delineated to be tackled as directly and comprehensively by research as the problem solving model suggests; that research evidence, especially from the social world, is unlikely to be sufficiently clear cut and unambiguous to be translated directly into policy; and that the model assumes a more straightforward policy process than is usually the case. In particular, it assumes that existing knowledge is available when a decision is to be made and that the relevant policy maker is able to access and understand it, or that all other factors shaping the policy process can be held constant while evidence is located.

In practice, the policy/research interface is usually circuitous and tangled, more akin to the process described in Weiss's interactive model.20 Within this model research is one of several knowledge sources on which policy makers draw in an iterative process of decision making. Other sources include policy makers' own experience, the press, politicians, colleagues and practitioners. Within this model, the influences that research can have on policy making are diffuse, for example, providing decision makers with fresh perspectives and concepts as well as data. Thus the researcher must jockey for a position of influence within the policy process.

It has been argued that the relation between research and knowledge underpinning the problem solving and interactive models is “technological”.22 This orientation towards research, which is heavily influenced by the natural sciences, assumes an immutable subject, which can be investigated in relative isolation from, for example, social and historical contexts. Although the interactive model recognises the contributions of different sources in shaping knowledge and accepts an indirect effect of research on policy, there is some pessimism about the diffuse nature of this impact.19 The elements with which research must interact may be viewed as impositions, circumstances that must be made the best of.

In his dialogical model, Giddens casts interaction in a different light.22 Starting from the standpoint that social knowledge is inherently contestable, he argues that knowledge is created through, not despite, interaction. At the heart of human activity lies unpredictability, "ambiguity and possibility of alternative interpretations".25 Thus the subject of social research is not fixed and may be transformed through study. Knowledge is absorbed into everyday life, appropriated and transformed by lay people, to be fed back to those studying the social world.

“Discoveries of social science, if they are of interest at all, cannot remain discoveries for long—the more illuminating they are, the more likely they are to be incorporated into action and thereby to become familiar principles of social life”.23

Thus social knowledge is jointly constructed from the interaction between researchers and others. The “dialogue” underpinning the dialogical model is between “social science” and its subject: the social world.

The implications of this for the relation between research and policy making are threefold. Firstly, the influence that research has on policy makers is most likely to come about through an extended process of communication between researchers, policy makers and lay actors. Secondly, when developing research applications, the contexts within which findings are to be implemented and the need to persuade others of the relevance of research have to be taken into account. Finally, social research has a part to play in interpreting for people in one environment what it is like to inhabit another.19 22


The study that this paper draws upon aimed to gain a better understanding of the interface between R&D and decision making within NHS local health authorities.1 It involved a literature review26 and nine case studies of ongoing or recently completed research and development projects initiated by health authorities and general practitioners with some responsibilities for commissioning health care (fundholders). Given its standing as a method with the potential to illuminate complex, ongoing processes,27 a case study approach was chosen to gather the perspectives of different interest groups and to consider how the groups interacted and their role in shaping the progress of the case study research initiatives. The project did not involve a formal evaluation of the case studies and we sought to minimise the impact of our research on the case studies we followed as much as possible. The project was an exploration of the research and policy processes in order to illuminate ways in which the interface between them might be better developed in the future.


This study was concerned with how policy makers used research that they deemed relevant to their own work. We therefore focused on purchaser initiated research projects and, in addition to following these particular pieces of research, used them as a springboard to study how research informed our informants' work more generally. To select the case studies, we drew on data on policy makers' research priorities, derived from a qualitative study undertaken in the region.28 This work highlighted the importance of research to support the new purchasing responsibilities—in particular empirical and methodological research into the needs of populations, research on developing the purchasing process and on monitoring and evaluating the effectiveness of investments. There was strong concern that development activities should not be neglected. Thus within our sample, case studies 1–6 are mainly concerned with assessing health need, while case studies 7–9 relate to monitoring and evaluating effectiveness. (The case studies are described below.) Many of the policy makers in our sample aimed to use the case study projects to develop replicable models for undertaking core purchasing activities. The fieldwork sites were all located within one of the 12 regional health authorities in the UK at the time of the research. The case studies were identified through the network of public health research centres that were the main source of R&D support for health authorities in the region. Additionally, the case studies were spread throughout the region so as to incorporate into the work different organisational structures and characters, and differences in approach to the commissioning task.

The case studies are now briefly described. General descriptions only have been given to respect the confidentiality of informants, which might be compromised by fuller details about projects undertaken in a relatively small area within a specified period of time.

Case study initiatives

Case study 1 was a survey of the prevalence of drug use within a locality. In addition a qualitative study of the perspectives of drug users and voluntary and public sector workers on appropriate service development was undertaken.

Case study 2 was an assessment of the health and social care needs of people with disabilities. This research was jointly commissioned by the social services and the health authority and aimed to involve users and carers at all levels and stages of the project and to develop replicable models for user-led health needs assessment. The process involved a survey and qualitative research. Findings were fed back to users and carers and into the social services and health authority planning processes.

Case study 3 was a locality-based health needs assessment conducted at a GP fundholding practice. The project developed a practice profile, using a variety of data including patient records and a survey and used these data to decide priority areas for health needs assessment (HNA) in the practice. A GP was funded to lead the assessment and development processes, with research support and health authority input.

Case study 4 was a locality-based health needs assessment, with a focus on primary health care, which formed part of an urban regeneration initiative. It involved a review of existing services and a "rapid appraisal" process29 to assess needs, which fed into the development of a health and social care strategy for the area. The steering group included residents and providers from the statutory and voluntary sectors. The rapid appraisal process was led by a researcher, with people living in the area and health authority staff forming part of the research team.

Case study 5 was an assessment of the health and social care needs of mothers with young children. It involved group interviews with mothers and individual interviews with providers of relevant services. The results were disseminated through a project report, local and national workshops and brief guidelines for purchasers. The fieldwork was conducted mainly by a researcher, but some group interviews were carried out by health authority staff.

Case study 6 was an initiative to develop existing knowledge into best practice guidelines for commissioners and providers using decision analysis and to monitor the influence of guidelines on decision making. In the course of the project, the emphasis moved from producing written guidelines to developing techniques for improving decision making in general and promoting dialogue within and between purchasing and provider organisations.

Case study 7 tested an instrument to assess the effectiveness of service investments, developed by a consultant in public health. The instrument comprised 10 open questions, covering policy background, local relevance, investment specific information, the practicalities of implementation, measures of success and monitoring arrangements. The instrument was applied to three different investment initiatives and evaluated using in depth interviews.

Case study 8 was a review of activity data and effectiveness literature on a surgical intervention, which fed into health authority guidelines.

Case study 9 involved monitoring and evaluating the effectiveness of a service investment, using outcome measures jointly agreed by health care providers and purchasers. One of the aims was to develop a replicable model of setting and monitoring outcomes. A researcher facilitated the process and provided guidance on methods of outcomes assessment.


Fieldwork took the form of in depth interviews with lead health authority managers and researchers, and with others identified by these informants as being involved in the research process, such as local authority managers and community development workers. In addition, project documentation such as research proposals and interim and final reports was consulted. This provided invaluable background information, enabling us to trace the history of often complicated projects as well as offering official versions of the projects to set alongside stakeholders' versions. On occasion, researchers were invited to observe development and dissemination meetings, which provided data on interactions between different stakeholders and on which findings from research studies were prioritised. Interviews lasted approximately one hour. A total of 28 interviews were conducted: 22 by one research officer (HE) and the remaining six by two senior research officers based at the Public Health Research and Resource Centre.

Investigation into how policy makers “used” research was carried out both through gathering data on the case study initiatives and also by using the case studies as a starting point for discussions about how policy makers viewed the role of R&D in their work more generally. In this respect, the case studies served as a useful foundation, in that discussions about potential and appropriate roles for research in policy making were rooted in the experience of initiatives underway or recently completed.

Given that qualitative research seeks to uncover the subjective meaning that experiences hold for a particular group,30 the validity of our findings was judged in terms of whether they "made sense" to members of the group studied.31 To this end, findings were fed back to informants and a "Think Tank" including senior NHS managers and academics was convened to provide feedback and commentary on our data and interpretations. A further test of validity is how qualitative research relates to existing literature in the field.31 The relation between our data and the literature on models of research utilisation (see above) is discussed in the following section.


Our informants were—explicitly or implicitly—operating with models of how research should be used, against which they measured the initiatives they were involved in. Underpinning many of the interviews with managers was the expectation that a successful R&D initiative would result in visible and direct change. One purchasing manager was initially reluctant to talk about a research initiative she had been involved in because:

“there are other areas where we've had a more demonstrable return...At certain times (during the project), I was trying to speed it up or to tighten it so that it actually delivered something that was demonstrable and would lead to change in the way resources were used”.

Most of the initiatives were commissioned in response to information gaps. The following quotation illustrates the kind of operational questions facing policy makers and conveys their sense of urgency in addressing them.

“there was the realisation that we did not have the information we needed to make decisions... it wasn't intellectual ... it was more like how do we find out how to use this money? How many nursing beds shall we buy?”

However, it was recognised that directly answering such questions was beyond the scope of a research project. Decisions about, for example, how many nursing beds to commission, were seen to be contingent on wider issues such as finances available, the relative merit of alternative services, and the preferences of various stakeholders.

key points
  • Factors such as financial constraints, shifting timescales and decision makers' own experiential knowledge tempered the direct influence of research evidence on decision making.

  • Research was more likely to impact on policy in indirect ways, including shaping policy debate and mediating policy makers' dialogue with health service providers and users.

  • Sustained dialogue between researchers and the users of research increases the use of research-based evidence in policy.

It was generally felt that research could clarify and contribute to decision making but not provide answers. There were exceptions. For example, the manager who commissioned the review of activity and literature on a surgical intervention described in case study 8 stated that the work did feed directly into policy guidance and he outlined a clear, linear route from the identification of the problem, framing the problem through research process to implementing findings. However, in general research was described as one of several sources of information policy makers drew on when making decisions, some of which they sought out, while others were imposed upon them.

“You've got to take your purchasing decisions on the best information and research you can get. But in the end they're dictated by corporate priorities, financial priorities and hopefully informed by that kind of independent viewpoint on what you knew”

“I mean so much is to do with contingency. You know so much of it's to do with the surrounding situation at the time.”

Although the need to be responsive to external influences was often problematic, decision making through the weighing up of different interests—and the contributory role of research within this process—was felt to be appropriate. One policy maker who had been involved in data collection described some discomfort about becoming "too close" to users' perspectives when he felt they were just one of several factors that had to be taken into account.

“I might become too close and just become a little bit too confused in my own mind as to what we were actually trying to do really...we became advocates for the users themselves rather than necessarily taking their views into account, as part of a whole, with you know, professional views and sort of, epidemiological or public health perspectives which, I think, is necessary.”

Within this environment of competing pressures, research findings would not necessarily be listened to as a matter of course. Some informants "championed" their research among those who were likely to be influential in implementing research findings, both within purchasing organisations and beyond. For example, case study 2 included interviews with senior managers in health and local authorities to bring the research findings to their attention, as well as to gather their perspectives. Similarly, the thorough strategies developed to incorporate users' views into the project were fuelled by concerns that the project should be credible among an articulate and well organised group of service users, in addition to a commitment to user involvement. There was some awareness of the slipperiness of the relation between research recommendations, policy and practice.

“There's a danger that the strategy says something which is actually inconsistent with what the research said. And I think that may be legitimate but in most cases, it happens by accident rather than design”.

Informants could be circumspect about the apparently direct impact research had had on policy. Although recommendations from case study 2 fed into the planning cycles for local and health authorities, informants reserved judgment on the success of the initiative, insisting that the degree to which the problem had been solved should be evaluated in terms of the impact the work had on the lives of the people researched rather than its visibility in policy documents.

“The process will only be successful if it results in change—and it's too early to say”.

Informants were aware of the indirect impact that research findings could have on policy problems. Involvement in the research process could offer opportunities to develop relationships with users and providers.

“It's allowing a common language—clinical people talk differently about things than managers—we've found a common way of looking at things.”

“our research resource centre is plumb in the middle of that (the relationship between the health authority and provider organisations) plugged into needs assessment and the contracting process—most importantly acting as an interface between the two”.

The research process was on occasion a relatively neutral site for policy makers to negotiate with other interested parties.

“it's having someone who is external from all the organisations, with no vested interest and no reason to placate or irritate that I think is quite useful”.

In addition, research could offer insights into the preferences of those not routinely included in decision making such as health care users.

“That you'd actually heard (users) say these things, you know, that made it more real and you knew what it meant when you read the headline, the one sentence summary.”

However, there were instances where consultation had been unsatisfactory. Although many of the informants were committed to user involvement and the case studies did have wide consultation processes at the outset, consultation could be funnelled into a search for answers to policy questions as planning cycle deadlines approached. Some informants expressed concern that if users were not involved in ongoing consultations, their voices would be distorted or drowned out by financial considerations, organisational politics or lobbying by other interest groups. As a community development worker pointed out, this could lead to some cynicism among user groups about participating in research.

“everybody ... has been consulted 50 million times about everything, and they're sick to death of being consulted and very often feeling that even though they've been consulted, they haven't necessarily got what they wanted.”

Research also had a part to play in making explicit value judgements that might otherwise be taken for granted.

“if it wasn't for someone critical like [researcher's name] having been there everyone would have said ‘Yes, yes’ ... we had to stop and think what we meant.”

Additionally, managers highlighted the value of ongoing dialogue with researchers on both the area under investigation and wider issues. Researchers were valued as “knowledgable outsiders” who were able to offer a critical commentary on practice.

“it's a more questioning perspective (which researchers have) about the service rather than a sort of ‘we’ve always done it so it must be right'. Which is the perspective some of us tend to have.”

However, this critical perspective was only acceptable in the context of in depth knowledge of the health service and an understanding of the constraints policy makers operated under. Striking a balance between empathy and independence required fine judgement. One manager described the contribution that research could make to policy makers as “an independent viewpoint on what we already knew”, pointing to the possibility of research fulfilling a merely legitimising role within an over-cosy relationship between researcher and policy maker.

Moreover, the credibility, confidence and experience researchers require to act as “knowledgeable outsiders” are qualities that are likely to develop over time in the context of ongoing dialogue between the policy and research worlds. This could be hard to achieve in the working environments evident in the case studies. Long term collaboration between researchers and policy makers was hampered by the short-term nature of research work—most of which was undertaken by contract researchers or research consultants—and by high turnover among health authority staff. There was a feeling among some managers that the contract researchers responsible for much of the work undertaken had not had the chance to develop the broader, critical perspective they valued.

“While they (project based researchers) have been good at the operational research they've been commissioned to do, they haven't necessarily had the breadth of knowledge to be able to advise about other issues.”


The contribution of research to policy making was less central and more subtle than the problem solving model implies and was captured more fully by the interactive model.

Within our study research played a variety of parts, ranging from providing perspectives and indeed “answers” to immediate policy questions, illuminating wider policy issues, developing new purchasing roles and negotiating relationships with users and providers. The “developmental” aspects of the research process were particularly striking, reflecting perhaps the organisational change and development of new roles that characterised our sample at the time of study.

The principle of joint interpretation underpinning the dialogical model resonated with the concerns of some of our informants to develop a negotiated and locally sensitive understanding of health need. Additionally, some aimed to build ongoing “dialogical” relationships with researchers to reflect on practice as well as to develop policy from specific research projects. However, dialogue and joint interpretation were often goals to aspire to rather than an accurate reflection of practice. “Dialogue” could be experienced as confrontational or fraught—the study revealed instances where interpretations clashed and consensus could not be reached. Furthermore, the culture of short-term research contracts and high job mobility among NHS managers militated against an ongoing process of interpretation involving researchers, as well as against the development of long term collaboration based on mutual trust and experience of the NHS.

When it came to making immediate policy decisions, a range of evidence sources was drawn on, including budgetary constraints and competing claims on resources, experiential knowledge, and national and local policy guidance. This need to weigh up different interests, and the value judgements involved in bringing evidence to bear on policy, are a fundamental aspect of policy makers' jobs not accounted for by the problem solving model.23 The words of William Gorham, describing why a major North American social science initiative in the mid-1960s had “failed” to produce the expected impact on policy and social life, illuminate this point well.

“No amount of analysis is going to tell us whether the nation benefits more from sending a slum child to preschool, providing healthcare for an old man or enabling a disabled housewife to resume her normal activities. The grand decisions—how much health; how much education; how much welfare and which groups of the population shall benefit—are questions of value judgements and politics”.32

This is not to suggest that analysis is redundant. Clearly it is important to map within each category of need what interventions are most helpful, feasible and acceptable and what they might cost. However, such data are open to multiple interpretations, which are shaped by the personal and professional values of the interpreter and by the social contexts within which research findings are to be applied. As the study reported here indicated, among the contributions that research can make to the policy process is the facilitation of new interpretations of research findings and uncovering the assumptions underpinning them.

Over emphasis on the “problem solving” capacities of research may mask the responsibility that policy makers—and indeed citizens—have for decisions about the allocation of social resources. Furthermore, it can raise unrealistic expectations about the direct influences of research while underestimating less tangible benefits.


We would like to thank Ursula Harries and Alan Higgins, who conducted the study with us, for their support throughout the preparation of this paper. Chris Bryant, Helen Busby, Iain Chalmers and Andrew Long also commented on this paper and we are grateful for their thoughtful input. Finally we would like to thank again all those people involved in the case studies who agreed to be interviewed and to share their experiences and concerns.



  • Funding: the study reported in this paper was funded by the NHS North West R&D Directorate.

  • Conflicts of interest: none.