The Impact of Social Policy
Pranab Chatterjee and Diwakar Vadapalli
There is increasing recognition today that social policies and programs should be carefully evaluated to determine whether they do, in fact, meet their stated objectives. Although it has often been assumed that social policies have a positive impact, this assumption has been called into question by many critics of government social programs. This chapter discusses the ways in which the impact of social policies can be assessed. It describes the principles and techniques used in different types of evaluation. Although evaluation research has become increasingly sophisticated, values and ideologies continue to play an important role in deciding which policy approaches work best.
The Logic of Impact Analysis
Rossi and Freeman (1985, 1993) and Rossi, Lipsey, and Freeman (2004) observe that there are four phases of social policy evaluation. These are needs assessment, selection of a program to respond to needs, impact evaluation, and cost-benefit analysis. Upon outlining the four phases of evaluation, they discuss many experimental, quasi-experimental, and time-series designs that can be used for program evaluation. Mohr (1995) singled out the idea of impact evaluation and called it an attempt to isolate the direct effects of a policy (or, more precisely, a program derived from a policy) apart from any confounding environmental effects. Earlier, Suchman (1967) suggested that a program is a form of social experiment, and any evaluation of it leads to the conclusion that the program does or does not produce given social ends. Following Suchman’s ideas, Riecken and Boruch (1974) listed ways of evaluating the impact of social experiments, many of which can be construed as preludes to new forms of social policy. Schalock (2001), using
these contributions, defined outcome-based evaluation as evaluation that uses valued and objective person- and organization-referenced outcomes to analyze a program’s effectiveness, impact, or efficiency. Suchman (1967), in his earlier work, had stated that, if any one program does not produce given social ends, one should conclude that it is a case of program failure.
However, if it seems that a substantial number of programs, all similar in nature, do not produce given ends, then it indicates a case of theory failure. In other words, the theory that generated the programs (as interventions to bring about a change) has been developed on faulty premises.
The groundwork of Rossi and Suchman on impact analysis (both of social programs and of the parent policies or theories on which they rest) is based on the assumption that quantitative analysis and multivariate design will produce knowledge about the impact of social policies and programs. Ask a typical policy analyst, academic, or program administrator about the impact of social policy and one will be provided with a sheaf of statistics supporting one perspective or another. This has come to be regarded as not just natural but the most appropriate response. On the matter of overemphasis on numbers, Zerbe (1998) has observed, “Hard numbers drive out soft” (p. 429).
Designing Impact Analysis: Some Issues
Confounding influences can easily masquerade as program effects. For example, an intervention designed to improve economic conditions in an urban neighborhood might well appear to be very successful, until one becomes aware that a regional upturn in the economy has occurred throughout the evaluation period and, although the neighborhood economy is much improved, it has, in fact, lagged far behind the rapid growth evident across the rest of the region. To reliably sort out program effects from environmental and other confounding influences is a daunting task. The significant achievements of impact analysis have been to spark an awareness of the need for such an analysis if program effectiveness is ever to be convincingly established and to offer a cookbook of strategies for attempting to achieve valid statistical evaluations.
Perhaps the best case for the use of quasi-experimental designs in impact analysis was made by Campbell (1969), when he proposed that the evaluation of a policy in one state, province, or country is possible by comparing the posttests in two nearly identical states, provinces, or countries, where one has experienced a policy and the other has not. However, this form of impact analysis often results in doing two case studies, which defeats the entire purpose of quasi-experimentation with valid samples and controls.
Impact analysis typically suggests a spectrum of research approaches from experimental to quasi-experimental and strongly recommends that the evaluator stick as closely to the classic controlled experiment and statistical analysis strategies as possible. Of course, it is rarely possible to approach these conditions in social experimentation and evaluation, so the
main thrust of an impact analysis is on quasi-experimental strategies and somewhat less powerful statistical analyses. The notion of qualitative strategies is usually dismissed as neither rigorous nor practical enough to warrant consideration.
The product resulting from an impact analysis is a methodologically and statistically sophisticated document detailing relationships and relative levels of importance among a number of variables. In keeping with the values of science in the modern age, it is widely accepted that rigorous attention to methodological and statistical norms will produce an objective analysis of the program under study, so the resulting information may safely be used to determine the fate of a particular policy and the fates of all those stakeholders upon whom it has an impact.
Information of this statistical kind has become the lingua franca of decision makers for many easily appreciated reasons. It is a manageable way to consider very large numbers, whether dollars or populations. It appears to offer a nearly irrefutable assessment, apparently devoid of bias or ideology. It is not presented as personal or emotional and is perceived as dispassionate and objective. It does indeed provide one of the most useful approximations available of valid grounds for a judgment as to the effect of a particular policy or program. And it allows for the easy flow of information from one venue to another, for example, from the budget office to the program designers to states, counties, and beyond.
The methodological and statistical achievements of impact analysis have, however, contributed to certain strategies for the evaluation of policy (Rossi & Freeman, 1985). Crane (1982) suggested that a useful impact analysis clearly depends on the formulation of evaluative hypotheses, which may take the following form:
Null hypothesis: The true effect is zero.
Alternative hypothesis: The true effect is at least equal to the threshold effect. (pp. 86-88)
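Crane's paired hypotheses can be sketched as a simple one-sided test against a stipulated threshold. The sketch below is illustrative only: the per-site effect data, the threshold value, and the helper function are hypothetical, and a real evaluation would rest on a properly specified statistical model rather than this crude z-test.

```python
# A minimal sketch of Crane's threshold-effect logic, using hypothetical
# outcome data; the sample values and threshold are illustrative, not
# drawn from any real evaluation.
from math import sqrt
from statistics import mean, stdev

def effect_exceeds_threshold(effects, threshold, z_crit=1.645):
    """Crude one-sided z-test: is the mean observed effect significantly
    above zero (rejecting the null), and does its point estimate reach
    the stipulated threshold effect (supporting the alternative)?"""
    n = len(effects)
    est = mean(effects)
    se = stdev(effects) / sqrt(n)          # standard error of the mean
    rejects_null = (est / se) > z_crit     # H0: the true effect is zero
    meets_threshold = est >= threshold     # H1: effect >= threshold effect
    return est, rejects_null, meets_threshold

# Hypothetical per-site program effects (e.g., change in an outcome score)
effects = [0.8, 1.2, 0.5, 1.0, 0.9, 1.1, 0.7, 1.3]
est, sig, meets = effect_exceeds_threshold(effects, threshold=0.5)
print(round(est, 2), sig, meets)  # → 0.94 True True
```

In this hypothetical run the estimated effect both differs significantly from zero and reaches the threshold, so the alternative hypothesis would be retained.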
Mohr (1995), using Deniston’s ideas (1972a, 1972b), listed further elements of impact evaluation, when he defined “a problem relative to a given [policy or] program as some predicted condition that will be unsatisfactory without the intervention of the program and satisfactory, or at least more acceptable, given the program’s intervention” (p. 14). Yet, troubling questions persist. In order to be statistically malleable, complex phenomena must be reduced to measurable form. If this is not possible (as it frequently is not), indicators must be developed. That is, one kind of information must be made to substitute for another. So, for example, educational level or occupational title is often used as a proxy for income or socioeconomic status because respondents to surveys are loath to reveal their actual earnings. One potential difficulty arises when indicators are used not as indicators but as actual measures. The potential for misunderstanding inherent in the process
is important enough that it has led to the establishment of nationwide panels charged with the production of increasingly reliable indicators.
For example, there are no existing measures that are called “measures of the impact of social policy.” However, the Human Development Index (HDI), developed by the United Nations, can be used as a somewhat direct measure of conditions in a society, and one can then speculate whether one or another of these conditions is the outcome of a certain kind of social policy. The HDI represents three equally weighted indicators of the quality of human life: longevity, as shown by life expectancy at birth; knowledge, as shown by adult literacy and mean years of schooling; and income, as purchasing power parity dollars per capita (United Nations Development Programme, 1994, pp. 108, 220). Using 0.875 as a boundary, 35 states from Canada through Portugal could be said to rate high as welfare societies in 1993. In contrast, 10 countries from the former planned economies of Eastern Europe can be seen to rate below the 0.875 threshold. The data are presented in Table 6.1.
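Given the equal weighting just described, the index construction can be sketched in a few lines. The component index values below are hypothetical placeholders, not actual UNDP figures; the real HDI methodology first scales each raw indicator (life expectancy, literacy and schooling, income) onto a 0–1 index before averaging.

```python
# A sketch of how the three equally weighted HDI components combine,
# and of the 0.875 welfare-society boundary used in the text.
# All component index values are hypothetical placeholders.

def hdi(longevity_idx, knowledge_idx, income_idx):
    """Simplified HDI: arithmetic mean of three component indices,
    each assumed already scaled to the 0-1 range."""
    return (longevity_idx + knowledge_idx + income_idx) / 3

def rates_high_as_welfare_society(h, boundary=0.875):
    """Classify a society against the 0.875 boundary."""
    return h >= boundary

h = hdi(0.90, 0.93, 0.88)   # hypothetical market-economy society
print(round(h, 3), rates_high_as_welfare_society(h))  # → 0.903 True
```

Because each component enters with equal weight, a shortfall on any one dimension (say, income) can be offset arithmetically by strength on another, which is one reason the index is only a "somewhat direct" measure of policy outcomes.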
The data from Table 6.1 can now be placed in the design parameters shown in Table 6.2. In this posttest-only design, the comparison of economic and social policies shows that market-oriented policies produce higher HDI levels than planning-oriented policies.
The data presented in Table 6.1 are standardized and can be placed in almost any set of design parameters. Statistical impact analysis works well with such data.
Perhaps a “better” form of impact analysis would emerge if the Human Development Index measures were available for two different times (e.g., Time 1 and Time 2); one could then see where planned and market economies stood at Time 1 and whether, at Time 2, the gain or loss of the planned economies is greater or less than that of the market economy societies. This better design would be called a pretest-posttest design (Campbell & Stanley, 1963; Cook & Campbell, 1979; Mohr, 1995).
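The gain comparison that such a pretest-posttest design supports can be sketched as follows. Every index value here is a hypothetical placeholder, not real HDI data; the point is only the logic of comparing Time 1 to Time 2 changes across the two groups of societies.

```python
# A minimal sketch of the pretest-posttest logic: compare mean HDI gains
# between market and planned economies across Time 1 and Time 2.
# All index values are hypothetical placeholders, not UNDP data.
from statistics import mean

def mean_gain(pretest, posttest):
    """Average Time 2 minus Time 1 change for a group of societies."""
    return mean(t2 - t1 for t1, t2 in zip(pretest, posttest))

market_t1, market_t2 = [0.85, 0.88, 0.90], [0.90, 0.92, 0.94]
planned_t1, planned_t2 = [0.70, 0.72, 0.75], [0.72, 0.73, 0.78]

gm = mean_gain(market_t1, market_t2)
gp = mean_gain(planned_t1, planned_t2)
print(round(gm, 3), round(gp, 3), gm > gp)  # → 0.043 0.02 True
```

The pretest scores anchor each group at its own starting point, so the comparison of gains is less vulnerable to preexisting differences between the groups than the posttest-only design is.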
A yet better design for impact analysis would emerge if pretest-posttest measures were available for societies that were comparable to the societies described in Table 6.1 at Time 1 but that did not experience industrial development to the same extent as the market economy and planned economy societies did. Actually, this effort can be simulated by going back to the posttest design (as shown in Table 6.2). Take, for example, the case of Afghanistan, which has not experienced any form of industrialization or social policy, and note its HDI level in 1993 (which is 0.229). Then, consider the ethnically similar neighboring societies of Uzbekistan, Turkmenistan, and Tajikistan and look at their HDI levels in 1993, which are 0.679, 0.695, and 0.616, respectively, as reported by the United Nations Development Programme (1996). Such data can be grouped together for a posttest design (as shown in Table 6.3) to see the impact of economic and social policies driven by forced industrialization and a planned economy.
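The posttest grouping behind Table 6.3 can be sketched directly from the 1993 HDI values quoted above; only the summary arithmetic (the comparison group's mean and its gap over Afghanistan) is added here, and that arithmetic is illustrative rather than part of the original tabulation.

```python
# Posttest-only comparison using the 1993 HDI values quoted in the text
# (United Nations Development Programme, 1996): Afghanistan, with no
# industrialization, versus its ethnically similar neighbors that
# experienced forced industrialization under planned economies.
from statistics import mean

no_policy = {"Afghanistan": 0.229}
planned = {"Uzbekistan": 0.679, "Turkmenistan": 0.695, "Tajikistan": 0.616}

planned_mean = mean(planned.values())
gap = planned_mean - no_policy["Afghanistan"]
print(round(planned_mean, 3), round(gap, 3))  # → 0.663 0.434
```

The sizable gap between the comparison group's mean and Afghanistan's score is what the posttest design reads as the impact of forced industrialization and planned-economy social policy, subject to the usual caveat that a posttest-only design cannot rule out preexisting differences.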