Study of Instructional Improvement

Patterns of Literacy Instruction

An interesting question is whether the different approaches the CSR programs in our study used to promote instructional change were consequential, particularly in producing distinctive instructional practices in the schools under study. Results from SII provide evidence on this issue. As we show below, both AC and SFA succeeded in getting their preferred instructional practices implemented faithfully in schools, whereas instruction in ASP schools was indistinguishable from that observed in comparison schools.

To examine instructional practices in the schools under study, SII researchers analyzed data from 75,689 instructional logs collected from 1,945 classroom teachers in grades 1 through 5 over the course of the study. In general, SII researchers analyzed the log data using three-level hierarchical linear models that nest multiple log reports within teachers within schools. The analyses reported here were designed to test for mean differences in instructional practices across schools in the different quasi-experimental groups after adjusting, through propensity score stratification, for a wide range of school-level pre-treatment covariates, as well as important lesson- and teacher-level characteristics that might differ across quasi-experimental groups.
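To make the modeling strategy concrete, the sketch below shows how such an analysis might be set up in Python with pandas and statsmodels. It is a simplified illustration, not the SII analysis itself: the file names (schools.csv, logs.csv), the variable names (in_program, pct_poverty, focus_writing, and so on), and the linear specification for a binary log outcome are all assumptions for illustration, and the published analyses were run with dedicated HLM software rather than this code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inputs: schools.csv holds one row per school with
# pre-treatment covariates and a 0/1 program-membership indicator;
# logs.csv holds one row per instructional log report.
schools = pd.read_csv("schools.csv")
logs = pd.read_csv("logs.csv")

# Step 1: school-level propensity scores estimated from pre-treatment
# covariates, then stratification into quintiles.
ps_model = smf.logit(
    "in_program ~ pct_poverty + enrollment + prior_achievement",
    data=schools,
).fit()
schools["ps"] = ps_model.predict(schools)
schools["ps_stratum"] = pd.qcut(schools["ps"], q=5, labels=False)

df = logs.merge(
    schools[["school_id", "in_program", "ps_stratum"]], on="school_id"
)

# Step 2: a three-level model -- log reports nested within teachers
# nested within schools -- approximated here with a school-level random
# intercept plus a teacher-within-school variance component. The
# outcome (did the lesson focus on writing?) is binary, so this linear
# specification is only a rough stand-in for a logit-link HLM.
hlm = smf.mixedlm(
    "focus_writing ~ in_program + C(ps_stratum) + grade",
    data=df,
    groups="school_id",
    vc_formula={"teacher": "0 + C(teacher_id)"},
).fit()
print(hlm.summary())
```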

Results

The results of these analyses are discussed in considerable detail in Correnti and Rowan (2007). In particular, these researchers reported very distinctive patterns of instruction for schools in the AC and SFA quasi-experimental groups, but little instructional distinctiveness for the ASP schools in the sample. In the analyses presented here, we discuss the findings from this paper in three main areas of literacy instruction: word analysis, reading comprehension, and writing. (Readers wishing to review the measures used in these three domains may consult the tables for word analysis, reading comprehension, and writing.)

We begin by noting that we observed no significant differences in literacy teaching practices between the ASP schools in the study and the comparison schools. On average, then, students in ASP schools experienced instructional opportunities that were virtually the same as those of students in comparison schools. This result is not surprising in light of the school improvement strategy pursued by the ASP program. ASP’s strategy of “cultural controls” did not prescribe specific instructional practices in the area of literacy but instead left it to individual schools and their teachers to determine which instructional practices to implement. When left largely to their own devices, teachers in ASP schools apparently implemented the same patterns of instruction common in comparison schools. Evidence for this observation is shown in Figure 1 and Figure 2. The result suggests that ASP’s approach to reform, like at least some other unsuccessful reform efforts, is not well suited to producing large-scale instructional change.

By contrast, Correnti and Rowan (2007) found substantial differences in literacy instruction between teachers in AC and comparison schools. Moreover, these differences occurred precisely where the AC instructional design was most prescriptive: in the area of writing and in the production of written text by students. The magnitude of these differences was quite large by social science standards. Controlling for lesson, teacher, and school characteristics, for example, Correnti and Rowan (2007) showed that AC teachers focused on writing in 54% of all lessons, whereas comparison teachers focused on writing in just 38% of all lessons. AC teachers also differed in the instructional practices and curricular content they covered when they taught writing. On days when writing was taught, AC teachers were more likely than comparison teachers to have engaged in 6 of the 10 writing-related instructional practices measured by SII researchers. In particular, when they taught writing, AC teachers were more likely than comparison teachers to also have the lesson focus on reading comprehension and to directly integrate work in reading comprehension with work in writing. They were also more likely to explicitly teach the writing process, more likely to provide instruction on literary techniques or different writing genres, and more likely to have students share their writing and make substantive revisions to it. Additionally, AC teachers were more likely than comparison teachers to have students write multiple connected paragraphs as they taught writing. Figure 3 and Figure 4 detail these findings. Sensitivity analyses indicated that the findings were not likely due to omitted variable bias (Correnti and Rowan, 2007).

Finally, we recently examined the variability of these measures of writing instruction. The evidence is compelling because a measure of variability, the confidence interval for the coefficient of dispersion (Bonett and Seier, 2006), reveals that variability in writing instruction among AC teachers was lower than among comparison teachers or, indeed, among teachers in either of the other CSR designs. Thus, not only did AC teachers use these instructional strategies more often on average; they were also less variable in their use of them. This is further evidence of the design’s effect on literacy instruction. Moreover, the reduction in variation was due largely to a reduction in variance among teachers within schools and less to a reduction in variation across schools.
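For readers who want to see how such a dispersion comparison can be computed, here is a small, self-contained Python sketch. The per-teacher rates below are simulated rather than taken from SII data, and the percentile bootstrap is a simple stand-in for the closed-form interval derived by Bonett and Seier (2006); only the coefficient-of-dispersion statistic itself follows their definition (mean absolute deviation from the median, divided by the median).

```python
import numpy as np

rng = np.random.default_rng(0)

def coef_dispersion(x):
    """Coefficient of dispersion: mean absolute deviation from the
    median, divided by the median (the statistic studied by Bonett
    and Seier, 2006)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    return np.mean(np.abs(x - med)) / med

def bootstrap_ci(x, stat=coef_dispersion, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI; a simple substitute for the
    closed-form interval in Bonett and Seier (2006)."""
    x = np.asarray(x, dtype=float)
    boots = np.array(
        [stat(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)]
    )
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical per-teacher rates of writing-focused lessons:
# a higher-mean, tighter-spread group versus a lower-mean, wider one.
ac = rng.beta(8, 6, size=60)
comparison = rng.beta(4, 6, size=60)

for name, grp in [("AC", ac), ("comparison", comparison)]:
    d = coef_dispersion(grp)
    lo, hi = bootstrap_ci(grp)
    print(f"{name}: COD = {d:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```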

Correnti and Rowan (2007) also found large differences in instruction between SFA and comparison schools. Teachers in SFA schools were more likely to teach reading comprehension on a daily basis, and they taught comprehension differently from comparison teachers when they did. The average SFA teacher, for example, taught reading comprehension in 65% of all lessons, while the average comparison school teacher did so in 50% of all lessons. Moreover, when reading comprehension was taught, SFA teachers were more likely than comparison group teachers to use teacher-directed instruction, to focus on literal comprehension strategies, to check students’ comprehension by eliciting brief answers, and (owing to extensive use of cooperative grouping arrangements) to have students discuss text with one another. Figure 5 illustrates many of these findings. It is also noteworthy that teachers in SFA schools did not compromise other aspects of comprehension instruction to obtain these differences. That is, in lessons where comprehension was taught, teachers in SFA schools were no less likely than comparison school teachers to focus on more advanced reading strategies or to have students write extended text about what they read. They did, however, more frequently provide direct instruction on reading strategies, with more frequent checks for understanding requiring brief oral or written answers from students. And, as in AC schools, teachers in SFA schools showed less variability in their reading comprehension instruction than did teachers in the comparison schools or in schools participating in the other CSR designs.

Summary

The analysis of literacy instruction in CSR schools is important for two reasons. First, it suggests that the ways in which CSR programs organized schools for instructional improvement were consequential, not only for the kinds of organizational processes that emerged to support instructional change within schools, but also for the kinds of instructional practices that were ultimately implemented. The evidence presented thus far suggests that although ASP’s use of cultural controls promoted a strong professional community of teachers working hard on instructional innovation, the lack of a clear instructional design or strong instructional guidance for teachers, coupled with weak instructional leadership, tended to produce quite ordinary instruction that did not differ from what was observed in comparison schools. By contrast, AC and SFA were far more prescriptive in their instructional designs. Both used different, but apparently quite effective, strategies of “professional” and “procedural” controls to stimulate instructional change, and in both cases SII researchers observed very distinctive forms of instructional practice in program schools.

A second point is that although both AC and SFA were “prescriptive” and developed organizational processes in schools that emphasized faithful implementation of their preferred instructional designs, the designs implemented in AC and SFA schools were quite different. Literacy instruction in AC schools was “literature-based” in emphasis. As a result, students were far more likely to be exposed to direct instruction in writing and to work on extended writing assignments than were students in other schools. By contrast, SFA’s instructional design placed greater emphasis on what might be called “skills-based” reading instruction, that is, explicit instruction in reading comprehension tasks, coupled with a tendency to have students provide brief written and oral answers to check for basic comprehension. As we demonstrate in the next section, these differences in instructional practices provide at least one explanation for the patterns of reading achievement found in SII schools.

The findings presented here are based on cross-sectional comparisons of teachers’ instruction in treatment and comparison schools. But it is important to consider these results from the students’ perspective, since the instructional differences observed in a single year accumulate over time for students who remain in the treated schools. For example, across grades 3-5 in our study, students in SFA schools experienced about 28% more reading comprehension instruction (341 days versus 265) than did students in comparison schools. Similarly, students in AC schools experienced about 36% more writing instruction (264 days versus 194) than students in the comparison schools. These differences are quite substantial for the portion of students who did not move out of the treated schools. Unfortunately, however, rates of student mobility in high-poverty schools can be very high. In the SII sample, for example, only 46% of the students originally sampled in 3rd grade remained in the same school by the end of 5th grade. Given our logic model (that differences in achievement growth are likely to be caused by differences in instruction), students’ accumulated instructional histories suggest two working hypotheses. One is that students in SFA and AC schools are more likely to show differences in achievement growth because of the large differences in accumulated instruction in the targeted areas. A second is that any such gains will be especially pronounced among non-mobile students, who remain in treated schools for multiple years and thus benefit from a greater increase in instructional opportunities than do mobile students in those schools.
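As a quick check on the arithmetic, the accumulated-days comparisons above can be verified directly; the day counts in this trivial sketch are taken from the text.

```python
# Accumulated instruction across grades 3-5, using the day counts
# reported in the text.
sfa_days, comp_reading_days = 341, 265
ac_days, comp_writing_days = 264, 194

# 341/265 - 1 = 0.287 (rounded in the text to "about 28%");
# 264/194 - 1 = 0.361 ("about 36%").
print(f"SFA comprehension advantage: {sfa_days / comp_reading_days - 1:.1%}")
print(f"AC writing advantage: {ac_days / comp_writing_days - 1:.1%}")
```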

References

Bonett, D., & Seier, E. (2006). Confidence interval for a coefficient of dispersion in nonnormal distributions. Biometrical Journal, 48(1), 144-148.

Correnti, R., & Rowan, B. (2007). Opening up the black box: Literacy instruction in schools participating in three comprehensive school reform programs. American Educational Research Journal, 44, 298-338.