Using an evaluability assessment to select methods for evaluating state technology development programs: the case of the Georgia Research Alliance

https://doi.org/10.1016/S0149-7189(98)00041-X

Abstract

Although increasing attention is being paid to government program performance across all fields, to what extent can the effectiveness of investments in technology-based economic development programs be assessed? Questions such as this raise the issue of the evaluability of technology-based economic development programs: the degree to which the particular characteristics of the program affect the ability to provide effective evaluation. This article discusses how an evaluability assessment of the Georgia Research Alliance (GRA) was conducted. The article presents the steps involved in conducting an evaluability assessment, including developing an understanding of the structure and operations of the program, eliciting the perspectives of key stakeholders and participants as to potential program impacts and how these might be measured, and reviewing the evaluation of technology-based economic development programs in other states. Different methods through which GRA could be evaluated are analyzed and compared.

Introduction

U.S. states have been increasing their investments in technology development programs in recent years. From 1992 to 1995, state investments in university/non-profit centers, joint industry-university research partnerships, direct financing grants, incubators, and near-term assistance programs using science and technology for economic development grew by more than 32%, reaching $405 million in 1995 (Coburn and Berglund, 1995; Berglund, 1998). These state investments are augmented, in most cases, by multiple other funders including the federal government, industry, venture capital, consortia, and private sources.
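As a rough check on these figures (a minimal back-of-the-envelope sketch; the 1992 baseline is not reported in the source and is inferred here purely for illustration), the implied 1992 funding level can be recovered from the reported growth rate:

    # Back-of-the-envelope check on the reported growth figures.
    # Assumption: the "more than 32%" growth is measured against the 1992
    # baseline, which is not stated in the source and is inferred here.

    funding_1995_musd = 405.0   # reported 1995 state investment, in $ millions
    growth_rate = 0.32          # reported minimum growth, 1992-1995

    implied_1992_musd = funding_1995_musd / (1.0 + growth_rate)
    implied_increase = funding_1995_musd - implied_1992_musd

    print(f"Implied 1992 baseline: ~${implied_1992_musd:.0f} million")
    print(f"Implied increase, 1992-1995: ~${implied_increase:.0f} million")

Because the reported growth is a lower bound, the implied 1992 baseline of roughly $307 million should be read as an upper bound.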

The 1990s have also been a period in which more attention has been paid to government program performance. Thirty-five states have some type of performance-based budgeting initiative, established through legislation, executive order, or budget agency action. The field of technology development has not been immune to this growing desire for performance measurement. A recent survey of such programs found that 95% of states employ methods for collecting performance data or conducting program evaluations. But despite the prevalence of some form of performance measurement or evaluation effort among state technology development programs, few states have well-conceived evaluation plans. Activity reporting, client survey data, and informal client contact are the most commonly used evaluation methods (Melkers and Cozzens, 1996); more systematic evaluation approaches are less common. Only in part is this due to lack of funding or interest; there are also complex issues about how best to apply evaluation methodologies to assess the often diffuse and indirect effects of technology promotion policies.

This article reports on an evaluability assessment conducted of one of Georgia's major technology development programs, the Georgia Research Alliance (GRA). The objective was to examine and identify research approaches and strategies for evaluating the impacts of the Georgia Research Alliance and its associated program investments. This involved interviewing program managers, university administrators, research faculty, private sector partners, and state sponsors to develop an understanding of the structure and operations of GRA and the perspectives of key stakeholders and participants as to potential program impacts and how these might be measured. Information was also gathered about the evaluation of technology-based economic development programs in other states. Drawing on this research, we then analyzed and compared different methods through which GRA could be evaluated.
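To illustrate how such a comparison might be organized, consider the following sketch of a simple comparison matrix. The methods, criteria, and scores shown are hypothetical placeholders for illustration only; they are not the ratings produced by the GRA evaluability assessment.

    # Hypothetical comparison matrix for candidate evaluation methods.
    # The methods, criteria, and 1-3 scores below are illustrative assumptions,
    # not the actual findings of the GRA evaluability assessment.

    criteria = ["methodological rigor", "cost/feasibility", "stakeholder credibility"]

    methods = {
        "case studies":          [2, 3, 3],
        "client surveys":        [2, 3, 2],
        "peer review":           [2, 2, 3],
        "benefit-cost analysis": [3, 1, 2],
        "econometric modeling":  [3, 1, 1],
    }

    # Simple unweighted sum; a real assessment would weight criteria
    # according to program context, resources, and needs.
    for method, scores in sorted(methods.items(), key=lambda kv: -sum(kv[1])):
        detail = ", ".join(f"{c}: {s}" for c, s in zip(criteria, scores))
        print(f"{method:<22} total={sum(scores)}  ({detail})")

The point of such a tabulation is not the arithmetic but the discipline of making selection criteria explicit before committing to an evaluation design.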

Section snippets

Methodological approaches to evaluating state technology development programs

Almost every methodological approach employed in the social and behavioral sciences has, at some point, been adapted to the purpose of evaluating technology policies and programs. Indeed, a body of literature has emerged that appraises the experience of using particular methods in the field of technology policy (see, e.g., Meyer-Krahmer, 1988; Evered and Harnett, 1989; Capron, 1992; Bozeman and Melkers, 1993; Georghiou, 1995; Capron and van Pottelsberghe de la Potterie, 1997; Piric and Reeve,

The GRA case: methods in practice

The GRA is a collaborative initiative among six research universities in Georgia that uses investments in research infrastructure in targeted industry areas to generate economic development results (see also the GRA's World Wide Web site at http://www.gra.org). Research infrastructure investments in advanced telecommunications, environmental technologies, and human genetics are administered by three centers. GRA has several key programmatic elements. Eminent scholars in each of the three research areas

Conclusions

Our evaluability assessment of GRA highlights the factors that need to be considered when selecting program evaluation approaches. Selection among the array of possible program evaluation methods needs, on the one hand, to consider the methodological strengths and weaknesses of any particular approach, and, on the other, program context, resources, and needs. Program characteristics also greatly affect the ability to conduct an effective evaluation. In the GRA case, these characteristics

References (36)

  • Behn, R., and Vaupel, J. (1982). Quick analysis for busy decision-makers. New York: Basic...
  • Berglund, D. (1998). State funding for cooperative technology programs. Unpublished manuscript. Columbus, OH: Battelle...
  • Bozeman, B. (1993). Peer review and the evaluation of R&D impacts. In B. Bozeman, and J. Melkers (Eds.), Evaluating...
  • Bozeman, B., and Melkers, J. (Eds.). (1993). Evaluating R&D impacts: Methods and practice. Boston, MA:...
  • Brown, M. A., Berry L. G., and Goel R. (1991). Guidelines for successfully transferring government-sponsored...
  • Brown, M. A., Curlee, T. R., and Elliott S. R. (1995). Evaluating technology innovation programs: The use of comparison...
  • Capron, H. (1992). Economic quantitative methods for the evaluation of the impact of R&D programmes. A state of the...
  • Capron, H., and van Pottelsberghe de la Potterie, B. (1997). Public support to R&D programmes: An integrated...
  • Chubin, D., and Hackett, E. (1990). Peerless science: Peer review and U.S. science policy. Albany, NY: State University...
  • Coburn, C., and Berglund, D. (1995). Partnerships: A compendium of state and federal cooperative technology programs....
  • Cook, T., and Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston:...
  • Cosmos Corporation (1996). A day in the life of the manufacturing partnerships: Case studies of exemplary engagements...
  • Evered, D., and Harnett, S. (Eds.). (1989). The evaluation of scientific research. Chichester, U.K.: John Wiley and...
  • Feller, I., and Anderson, E. (1994). A benefit-cost approach to the evaluation of state technology development...
  • Georghiou, L. (1995). Research evaluation in European national science and technology systems. Research Evaluation,...
  • Irvine, J., and Martin, B. (1984). Foresight in science: Picking the winners. London:...
  • Jarmin, R. (1998). Evaluating the impact of manufacturing extension on productivity growth. Journal of Policy Analysis...
  • Kingsley, G., Bozeman, B., and Coker, K. (1996). Technology transfer and absorption: An R&D value-mapping approach....