Study of Instructional Improvement

Processes for Instructional Improvement

Having presented initial portraits of the comprehensive school reform (CSR) schools under study, we turn now to several detailed discussions of the data analyses from which these portraits were constructed. The first analysis examines the organizational processes for school improvement found in the four groups of schools under study; a detailed discussion of these findings has been presented in Rowan and Miller (2007). That paper used measures of organizational processes constructed from surveys administered annually over the 4-year course of the study to approximately 5,500 teachers and about 800 school leaders. Specifically, measures of organizational processes for school improvement were developed from teacher and school leader survey responses in four analytic domains: (1) the degree of instructional guidance and standardization in schools; (2) the degree of intensive instructional leadership present in schools; (3) schools' emphasis on faculty innovation, discretion, and autonomy; and (4) the strength of faculty professional communities in schools.

A total of 12 separate measures of these organizational processes was constructed in three steps. First, person-level scale scores were created by applying the Rasch model (a one-parameter item response theory model) to the survey responses of teachers and school leaders (see Bond & Fox, 2001, for an accessible discussion of the Rasch model). We used the Rasch modeling software program Winsteps and provide Winsteps control and output files for the "instructional guidance" measure as an example. Second, these scores were modeled as outcome measures in three-level hierarchical linear models (HLM), in which individuals' annual scores on a particular scale were nested within individuals, who were in turn nested within schools. In a third step, school-level scores on the outcome measures were derived from these HLM models. Specifically, the variables reported in this section are school-level empirical Bayes residuals from the models; each indicates a school's average score on a measure, aggregated across all teachers (or leaders) and all four years, after controlling for staff characteristics, student characteristics, and school size. In all instances, these empirical Bayes (EB) residuals are standardized scores with mean = 0 and standard deviation = 1. The results reported below are based on comparisons of the overall means of the four quasi-experimental groups on the 12 outcome measures, where group means were "bracketed" by their standard errors. By comparing the ranges defined by the standard errors of the means, we could gauge both the magnitude and the degree of uncertainty of group mean differences on these organizational measures.
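The logic of the three-step procedure can be sketched in a short script. The function names, the group scores, and the simple SE formula below are illustrative assumptions, not SII measures or results; the sketch shows only the form of the Rasch (one-parameter) response model and how standardized group means, bracketed by their standard errors, are compared for overlap.

```python
import math
import statistics

def rasch_probability(ability, difficulty):
    # One-parameter (Rasch) IRT model: probability that a person with the
    # given ability (theta) endorses an item of the given difficulty (b).
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def group_mean_brackets(scores_by_group):
    # For each group, return (mean, mean - SE, mean + SE), where
    # SE = sample SD / sqrt(n). Non-overlapping brackets suggest a
    # group difference larger than its sampling uncertainty.
    brackets = {}
    for group, scores in scores_by_group.items():
        mean = statistics.fmean(scores)
        se = statistics.stdev(scores) / math.sqrt(len(scores))
        brackets[group] = (mean, mean - se, mean + se)
    return brackets

# Hypothetical standardized school-level scores (overall mean 0, SD 1),
# invented for illustration only.
scores = {
    "SFA": [0.9, 0.7, 1.1, 0.8],
    "AC": [0.8, 1.0, 0.6, 0.9],
    "ASP": [-0.7, -0.5, -0.9, -0.6],
    "Comparison": [-0.6, -0.8, -0.4, -0.7],
}

for group, (mean, lo, hi) in group_mean_brackets(scores).items():
    print(f"{group}: mean = {mean:.2f}, bracket = [{lo:.2f}, {hi:.2f}]")
```

With these invented scores, the SFA and AC brackets sit well above the ASP and comparison brackets without overlapping them, which is the pattern of "bracketed" separation the analyses below describe.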


As expected, the four quasi-experimental groups differed in important ways with respect to the amount of instructional guidance and standardization that was reported to exist in schools. Here, three separate measures of instructional guidance were examined: (a) school leaders' reports of the extent to which there was a press for standardization of practice in the school; (b) teachers' reports of how closely their improvement efforts were monitored; and (c) teachers' reports of the instructional guidance they received.

Rowan and Miller (2007) showed that schools participating in the America's Choice (AC) and Success for All (SFA) programs exhibited much higher levels of instructional guidance and standardization than schools participating in the Accelerated Schools Project (ASP) program and comparison schools. For example, among the four quasi-experimental groups in SII, the press for standardization was reported to be much greater in schools implementing AC and SFA than in comparison or ASP schools. In addition, teachers in AC and SFA schools felt that their improvement efforts were monitored more closely than did teachers in ASP and comparison schools. Teachers in schools implementing AC and SFA also reported receiving more instructional guidance than teachers in ASP and comparison schools; however, mean differences on the instructional guidance variable were not statistically significant (see Graph 1).

The second domain of organizational processes examined was instructional leadership. Here, school leaders reported on three dimensions of instructional leadership: their involvement in staff development, their advising of teachers on matters of instruction, and their efforts at setting a vision for teaching and learning in the school. The data indicate that schools implementing the AC design scored higher than schools in the other three groups on all three dimensions of instructional leadership, and the differences between AC schools and schools in the other three groups were statistically significant in nearly every case. These findings on leadership processes illustrate a key difference in implementation strategy between SFA and AC. Although both programs provided teachers with strong instructional guidance, SFA's strategy of procedural controls relied primarily on scripted lesson routines to secure faithful implementation, whereas AC relied more on having school leaders work closely with teachers to help them "learn" the design (see Graph 2).

The third organizational process we examined was schools' emphasis on teacher autonomy and innovation, assessed through three measures: leaders' reports of teacher autonomy, leaders' reports of the prevalence of values-based decision making in the school, and teachers' reports of support for innovation in the school. Given ASP's strategy of cultural control, we anticipated that ASP schools would have higher average scores than AC and SFA schools on these three measures. The results confirmed this prediction: the means for schools implementing the Accelerated Schools Project design were higher than the means of the three other quasi-experimental groups on all three measures of teacher autonomy and innovation, and with only a few exceptions, these mean differences were statistically significant (see Graph 3).

The fourth and final organizational process examined was the strength of professional community in the schools under study, which was assessed through three measures: teachers' reports of trust and respect among the faculty, the prevalence of collaboration on instruction, and the prevalence of critical discourse among school staff. Recall that the ASP strategy of cultural control relied strongly on teachers to generate locally-proposed instructional improvements. For this approach to succeed, however, schools would seem to need strong professional communities in which high levels of trust, critical discourse, and collaboration were evident. We found evidence that this was indeed the case in ASP schools. As with the indicators of teacher autonomy and innovation, ASP schools scored higher than the other three quasi-experimental groups on all three measures of professional community, though the differences on these indicators were not as large as those found in the areas of autonomy and innovation. In particular, ASP teachers reported substantially greater levels of trust among faculty members than did teachers in AC schools, and ASP teachers also reported a greater prevalence of critical discourse among colleagues than did teachers in AC schools (see Graph 4).


Our analyses of data on organizational processes in schools illustrate how externally-designed and operated instructional improvement programs can pursue very different strategies to produce instructional change in schools. In particular, the survey data just discussed show that ASP's strategy of cultural control led to the development of school environments that were quite strong in professional community (i.e., trust among faculty, critical discourse, and teacher collaboration) and to a great deal of teacher autonomy in pursuit of classroom instructional innovations. In contrast, AC and SFA took very different approaches to instructional reform. SFA, for example, pursued a strategy of "procedural controls" to promote instructional change, and as our data show, this approach led to relatively high levels of instructional guidance and to an associated press for instructional standardization. But, as we saw, SFA's heavy reliance on procedures and routines appeared to act as an organizational substitute for instructional leadership and failed to stimulate a strong sense of professional community. Finally, America's Choice also built a significant amount of guidance and press for instructional standardization into its improvement strategy, but it did so not by emphasizing scripted instructional routines but by encouraging the development of strong instructional leadership in schools. As with SFA, this emphasis on standardization and leadership worked against the formation of strong professional communities and also decreased the press for innovation and autonomy in AC schools.


Bond, T. G., & Fox, C. M. (2001). Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Lawrence Erlbaum Associates.

Rowan, B., & Miller, R. J. (2007). Organizational strategies for promoting instructional change: Implementation dynamics in schools working with comprehensive school reform providers. American Educational Research Journal, 44, 252-297.


Graph 1
Instructional Guidance

Graph 2
Instructional Leadership

Graph 3
Teacher Autonomy

Graph 4
Professional Community