Assessing the impact of planned social change

https://doi.org/10.1016/0149-7189(79)90048-X

Abstract

It is a special characteristic of all modern societies that we consciously decide on and plan projects designed to improve our social systems. It is our universal predicament that our projects do not always have their intended effects. Very probably we all share in the experience that often we cannot tell whether the project had any impact at all, so complex is the flux of historical changes that would have been going on anyway, and so many are the other projects that might be expected to modify the same indicators.

It seems inevitable that in most countries this common set of problems, combined with the obvious relevance of social science research procedures, will have generated a methodology and methodological specialists focused on the problem of assessing the impact of planned social change. It is an assumption of this paper that, in spite of differences in the forms of government and approaches to social planning and problem-solving, much of this methodology can be usefully shared — that social project evaluation methodology is one of the fields of science that has enough universality to make scientific sharing mutually beneficial. As a part of this sharing, this paper reports on program impact assessment methodology as it is developing in the United States today.


Reprinted, with minor revisions and additions, from Gene M. Lyons (Ed.), Social Research and Public Policies, Hanover, New Hampshire: University Press of New England, 1975, with the permission of the publisher. Preparation of this paper was supported in part by grants from the Russell Sage Foundation and by National Science Foundation Grants GSOC-7103704-03 and BNS76-23920. An earlier version of this paper was presented at the Conference on Social Psychology, Visegrád, Hungary, May 5–10, 1974.
