Avoiding type III error in program evaluation: Results from a field experiment

https://doi.org/10.1016/0149-7189(80)90042-7

Abstract

A common yet questionable assumption underlying many evaluations of service intervention programs is that program clients uniformly receive the services purportedly available. The authors draw upon the experience of a randomized field experiment to point out the hazards of that assumption. They found marked differences among clients in the amount of service actually received during participation in the program evaluated. Moreover, the data suggest that program outcomes varied as a function of the amount of service received. These findings are offered as a cautionary note to other evaluators: the amount of service actually received by clients should be accurately recorded and incorporated into the analyses of program outcomes.
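The recommendation above can be illustrated with a minimal sketch. This is not the authors' analysis; it uses entirely synthetic data and assumes, for illustration, that the outcome depends on the dose of service actually received rather than on mere assignment to the program. Comparing a naive assignment-only contrast with a regression that includes recorded service amount shows why the latter is informative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical randomized field experiment: half the clients are
# assigned to the program (1) and half to the control condition (0).
assigned = rng.integers(0, 2, n)

# Among assigned clients, the amount of service actually received
# varies widely (a synthetic "dose" on an arbitrary 0-10 scale).
dose = assigned * rng.uniform(0.0, 10.0, n)

# Synthetic outcome: driven by dose received, plus noise.
outcome = 1.0 * dose + rng.normal(0.0, 1.0, n)

# Naive analysis: difference in mean outcome by assignment alone,
# which ignores how much service assigned clients actually got.
naive_effect = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# Dose-aware analysis: regress outcome on assignment and recorded
# service amount (ordinary least squares via numpy).
X = np.column_stack([np.ones(n), assigned, dose])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
dose_effect = coef[2]  # estimated change in outcome per unit of service
```

In this synthetic setup the assignment-only contrast blurs together clients who received much and little service, while the dose coefficient recovers the per-unit effect of service actually delivered.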


Cited by (188)
