Study of Instructional Improvement

Concept of Comprehensive School Reform

During the 1990s, the movement toward comprehensive school reform arguably became the “poster child” for scientifically based reform in American education, supported initially by business leaders and philanthropists, then by the Comprehensive School Reform Demonstration Act, and more recently by Part F of No Child Left Behind, which gives states funding (subject to availability) to award competitive grants to local schools to facilitate local adoption of CSR programs. Representative David Obey (D-WI), who co-sponsored the first federal bill supporting comprehensive school reform, called this movement “the most important education reform effort since Title I” because CSR programs “give local schools the tools…they [need to] raise student performance to … high standards” (Congressional Record, 1997).

Interestingly, the CSR movement was not the creation of the federal government. Rather, it was initiated in 1991 by a private, not-for-profit organization known as the New American Schools Development Corporation (NASDC). Founded as part of President George H.W. Bush’s America 2000 initiative, NASDC (later renamed New American Schools [NAS]) provided the venture philanthropy and political capital needed to catapult comprehensive school reform to national prominence. Under the leadership of David Kearns, chairman emeritus of the Xerox Corporation and a former Deputy Secretary of Education, NAS raised more than $130 million in contributions from the nation’s top businesses and foundations with the explicit goal of fostering what it called “a new generation of American schools.” Researchers at the RAND Corporation who studied NAS during this key period reported that the organization’s “core premise was that all high quality schools possess, de facto, a unifying design that…integrates research-based practices into a coherent and mutually-reinforcing set of effective approaches to teaching and learning” (Berends et al., 2002: xv).

To make this core idea a reality, NAS funded the development of several new, “break the mold” designs for school improvement that it called “whole-school reforms.” After selecting 11 organizations from more than 600 responses to a competitive request for proposals, NAS began its work in 1992 with a one-year development phase, during which the selected organizations (known as “design teams”) created new designs for schooling. This was followed by a two-year demonstration phase, during which the design teams worked to implement their new designs in a small number of demonstration sites, and then by a five-year scale-up phase, during which they worked to implement the designs more broadly in a larger set of school districts chosen by NAS.

Although the NAS scale-up effort met with uneven success (only 7 of the original 11 design teams made it out of the scale-up phase), NAS nevertheless gave a tremendous boost to the idea of school improvement by design. In 1997, for example, when the NAS scale-up phase ended, the surviving NAS design teams were working with over 685 schools around the country. Then, with federal funding from the Comprehensive School Reform Demonstration Act, and later from Part F of NCLB, nearly 7,000 schools across the country adopted CSR designs provided by well over 600 different organizations. By any count, this was a remarkable rate of uptake for an educational innovation. In a few short years, roughly 10% of all public schools in the United States had adopted a CSR design, more than twice the number of schools that were operating as charter schools during the same time period.

Previous Research on CSR Programs

Research on the design, implementation, and instructional effectiveness of comprehensive school reform (CSR) programs, however, reflects familiar themes from research on previous education reform efforts in the United States. Like past reforms, the CSR movement began when an influential and dedicated group of reformers (in this case business and government leaders) succeeded in promoting (and, through legislation, institutionalizing) a new template for school improvement. This new template then diffused widely and quickly through the education system, as several thousand schools adopted one or another CSR program. But while adoption of CSR programs was seemingly quick and easy, implementation at local sites turned out to be difficult (Bodilly, 1996; Berends, Bodilly, and Kirby, 2002; Desimone, 2002; Mirel, 1994), and program evaluations gradually uncovered a pattern of weak effects on the reform’s intended goal of improving students’ academic achievement (Borman et al., 2003). Consequently, enthusiasm for the new reform strategy faded, and American education policy veered away (again) from what was once considered a promising approach to school reform in search of a new magic bullet for improving schools.

There is a side note to this story, however. While a meta-analysis of CSR program evaluations conducted by Borman et al. (2003) showed that CSR program effects on student achievement were quite small on average (Cohen’s d = .12 in comparison group studies), the analysis also demonstrated a great deal of program-to-program variability in observed effect sizes (with Cohen’s d ranging from -.13 to +.92 in comparison group studies). Thus, some CSR programs apparently worked much better than others in improving academic achievement, a common finding in evaluations of externally-designed school improvement programs dating to the earliest evaluations of Follow Through designs (see, for example, House et al., 1978; Gersten, 1984).
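For readers less familiar with this metric, Cohen’s d expresses the difference between treatment and comparison group means in standard deviation units. In its standard form (meta-analysts such as Borman et al. may use slight variants), it is computed as

\[
d = \frac{\bar{X}_{T} - \bar{X}_{C}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_{T}-1)\,s_{T}^{2} + (n_{C}-1)\,s_{C}^{2}}{n_{T}+n_{C}-2}}.
\]

On this scale, the average effect of .12 implies that students in CSR schools outscored comparison students by roughly one-eighth of a standard deviation, while the high end of the observed range (+.92) corresponds to nearly a full standard deviation.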

Here, our central objective is to develop an explanation for the variable effects on student achievement that occur when schools embrace design-based instructional improvement programs. Previous research on comprehensive school reform has tended to focus on three determinants of program success: (1) the nature of the problem being addressed by a social policy or program (e.g., the problem’s inherent complexity or uncertainty); (2) the nature of the program itself (e.g., features of the program’s design); and (3) the social context in which the intervention or policy change is attempted (e.g., the policy environment in which change is attempted, the motivation and skill of personnel implementing the program, and the organizational culture, climate, and authority structure under which implementing personnel work). By holding constant the problem being addressed by the CSR programs we studied (i.e., instructional improvement), and by limiting the social context in which these programs operate (to matched samples of elementary schools), our work focuses on program designs as the key factor explaining program outcomes.

This design-based explanation assumes that effective programs resolve two problems of intervention simultaneously. First, organizations providing design-based assistance to schools cannot succeed in raising student achievement unless their designs for instructional practice are different from (and more effective than) existing instructional practices. This is the old adage that if you keep doing the same things, you cannot expect different outcomes. Second, building a CSR program around an effective instructional design does not guarantee improved student learning unless there is also an effective strategy for getting that instructional design implemented in schools. Simply stated, an externally-developed program works when it is built around both an effective instructional design and a sound implementation strategy. From this perspective, school improvement by design works under limited circumstances and can go wrong in several ways. Programs can fail, for instance, if they are built around an instructional design that is more effective (in principle) than current practice but have a poor design for implementation. Conversely, a program can fail if it has a very strong design for program implementation but is built around a weak and ineffective instructional design. Finally, the worst case is an external program built around poor ideas about both instruction and implementation. From this perspective, building an effective design is difficult and requires attention to both instructional design and implementation support.

While this basic idea seems obvious, it is worth noting that much prior research on design-based instructional improvement has failed to gather data simultaneously on these twin issues of instructional design and implementation. Consider, for instance, the large body of research on curriculum development projects supported by the National Science Foundation (NSF) in the 1960s, arguably America’s first attempt at large-scale school improvement by design. Research on this reform effort often took for granted that the NSF-supported curricular designs were more effective than existing materials, especially since the more innovative curricula were developed by universities and prestigious not-for-profit organizations. A major finding from this body of research was that few NSF curricula were implemented with any fidelity at all (for reviews of this literature, see Welch, 1969; Darling-Hammond and Snyder, 1992; Elmore, 1996). This is an important finding, but it does not tell us whether the new curricula, if well implemented, would have improved student outcomes. So, in fact, we gained only partial information about the process of school improvement by design from this research.

A somewhat different problem plagued the next generation of research on innovative programs. Consider, for example, the so-called “planned variation” experiment designed to evaluate alternative Follow Through designs. In this work, researchers focused on measuring student outcomes but, as a result of funding problems, failed to collect measures of program implementation (House, Glass, McLean, and Walker, 1978). A major finding of this research was that effects on student outcomes varied greatly across different Follow Through designs. Explanations for this finding, however, were the subject of a huge debate, largely because researchers could not determine whether the variability in program effects was due to differences in the effectiveness of the programs’ instructional designs, to differences in the ability of program designers to get their instructional regimes faithfully implemented across multiple school settings, or to both (for a review of this literature, see the essays in Rivlin and Timpane, 1975).

Our Approach

Fortunately, much has been learned in more recent decades about how to study design-based intervention programs. The general concept has been to build a “logic model” that describes a “theory of action” underlying a particular reform effort, and to use that model to lay out both the intermediate and final outcomes that reformers hope to achieve as a result of their reform efforts. Our effort to formulate such a model for the process of school improvement by design is shown in Figure 1. That figure begins on the left-hand side with the assumption that any provider of design-based assistance has a program design, which we earlier defined as a blueprint for change laid out along two dimensions: (a) an instructional design; and (b) a design for school organizational practices that encourage faithful implementation and productive use of that instructional design. Moving to the right in the figure, we have included a set of arrows describing our assumption that these designs influence the ways schools are organized to manage and support instruction and to encourage the use of particular instructional practices in schools. Finally, the arrows in Figure 1 suggest that organizational and instructional practices in schools are reciprocally related and affect student outcomes.

Figure 1: Logic Model of Design-Based Instructional Improvement

In the education policy research field, much effort has gone into building highly general conceptual frameworks to describe each step of this process. For example, Berman (1978) developed an influential conceptual framework that described intervention designs as either “programmed” or “adaptive” in order to capture the fact that programs can be more (or less) explicit and directive about the kinds of organizational and instructional practices they want implemented in schools. Others have developed conceptual frameworks to describe organizational practices for managing instruction, for example, the contrast between “mechanistic” and “organic” forms of management that Miller and Rowan (2006) used to signal the extent to which patterns of instructional supervision, monitoring, and decision making in schools are either centralized, standardized, and routinized or decentralized and flexible. Finally, attempts have been made to characterize instructional practices in schools as oriented either to “basic” or to “higher order” instructional goals (e.g., Cohen, McLaughlin, and Talbert, 1993), or to “reform” versus “traditional” practices (Stecher et al., 2006).

In this report, we depart from these familiar categories to describe how the CSR programs we studied actually worked. That is not because we take issue with these more generalized conceptual frameworks. In fact, we find them useful. However, in our own research, we have found that such general categories do not provide the kinds of nuanced descriptions of CSR program designs, intended organizational practices, or intended instructional practices that are needed to explain program-specific outcomes in a logically compelling fashion, either at the intermediate stage of our model (where we are looking at organizational and instructional change resulting from CSR participation) or at the final step of our model (where we are looking at student achievement outcomes that result from working with a specific CSR program).

The remainder of the report roughly follows Figure 1 in sketching out our logic model for studying the process of design-based school improvement. In the next section, we present brief “portraits” of the three CSR designs we studied. The following three sections then report in more detail on these design-based approaches, discussing: (a) how the different CSR models organized to promote instructional change in classrooms; (b) whether the CSR programs succeeded in moving schools toward their preferred instructional designs; and (c) whether, once implemented, the CSR programs succeeded in improving student achievement at the schools under study. Taken together, these topics shed light on the specific mechanisms through which the CSR designs under study influenced student achievement, particularly reading achievement, a major emphasis of improvement efforts in all of the schools we studied.


References

Berends, M., & King, B. (1994). A description of restructuring in nationally nominated schools: Legacy of the iron cage? Educational Policy, 8(1), 28-50.

Berends, M., Bodilly, S., & Kirby, S. (2002). Facing the challenges of whole-school reform: New American Schools after a decade. Santa Monica, CA: RAND.

Berman, P. (1978). Designing implementation to match policy situation: A contingency analysis of programmed and adaptive implementation. Santa Monica, CA: RAND.

Bodilly, S. (1996). Lessons from New American Schools Development Corporation’s demonstration phase. Santa Monica, CA: RAND.

Borman, G.D., Hewes, G.M., Overman, L.T., & Brown, S. (2003). Comprehensive school reform and achievement: A meta-analysis. Review of Educational Research, 73, 125-230.

Cohen, D., McLaughlin, M. W., & Talbert, J. (Eds.). (1993). Teaching for understanding: Challenges for policy and practice. San Francisco: Jossey-Bass.

Desimone, L. (2002). How can comprehensive school reform models be implemented? Review of Educational Research, 72(3), 433-480.

Elmore, R. F. (1996). Getting to scale with good educational practices. Harvard Educational Review, 66, 1-25.

Gersten, R. (1984). Follow Through revisited: Reflections on the site variability issue. Educational Evaluation and Policy Analysis, 6(4), 411-423.

House, E., Glass, G., McLean, L., & Walker, D. (1978). No simple answer: Critique of the Follow Through evaluation. Harvard Educational Review, 48(2), 128-160.

Miller, R., & Rowan, B. (2006). Effects of organic management on student achievement. American Educational Research Journal, 43(2), 219-253.

Mirel, J. (1994). The evolution of the New American Schools: From revolution to mainstream. New York: Fordham Foundation.

Rivlin, A. M., & Timpane, P. M. (Eds.). (1975). Planned variation in education: Should we give up or try harder? Washington, DC: Brookings Institution.

Rowan, B. (2001). The ecology of school improvement. Journal of Educational Change, 3, 283-314.

Stecher, B., Le, V. N., Hamilton, L., Ryan, G., Robyn, A., & Lockwood, J. R. (2006). Using structured classroom vignettes to measure instructional practices in mathematics. Educational Evaluation and Policy Analysis, 28(2), 101-130.

Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains: A review. Review of Educational Research, 73(1), 89-122.

Welch, W.W. (1969). Curriculum evaluation. Review of Educational Research, 39, 429–443.