Study of Instructional Improvement

Papers & Publications

The sections below contain references and abstracts of SII research papers and reports. Where permissible by copyright law, these papers are available for downloading in PDF format. (Viewing a PDF file requires the free Adobe Reader software.) For all other papers, you will need to use the citation information to obtain the articles through an academic library system.

U.S. copyright law (sections 107 & 108) stipulates that copies are made only for personal use. To request permission for other kinds of copying, such as general distribution, or for creating new collective works, please forward a written request to the publisher of the article of interest. If you are uncertain of the appropriate contact information for a particular publisher or copyright holder, feel free to contact the author(s) or email sii@umich.edu.

CSR: Organization for Change and School Improvement

Barnes, C., Camburn, E., Kim, J., & Rowan, B. (2004, April).
School leadership and instructional improvement in CSR schools. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA. (abstract)
Camburn, E., Rowan, B., & Taylor, J. (2003).
Distributed leadership in schools: The case of elementary schools adopting comprehensive school reform models. Educational Evaluation and Policy Analysis, 25(4), 347-373. (abstract) [Article posted with permission from the American Educational Research Association © 2003. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for a non-scholarly purpose, please contact AERA directly.]
Cohen, D. K., & Ball, D. L. (2007).
Educational innovation and the problem of scale. In B. Schneider & S. McDonald (Eds.), Scale-up in education: Ideas in principle (Vol. 1, pp. 19-36). Lanham, MD: Rowman & Littlefield. (abstract)
Cohen, D. K., & Ball, D. L. (2000).
Instruction and innovation: Reconsidering the story. Working paper, Consortium for Policy Research in Education, Study of Instructional Improvement. Ann Arbor, MI: University of Michigan. (abstract)
Cohen, D. K., & Ball, D. L. (1999).
Instruction, capacity, and improvement. (CPRE Research Report No. RR-043). Philadelphia, PA: University of Pennsylvania, Consortium for Policy Research in Education. (abstract)
Cohen, D. K., Raudenbush, S. W., & Ball, D. L. (2003).
Resources, instruction, and research. Educational Evaluation and Policy Analysis, 25(2), 1-24. (abstract)
Glazer, J. (2008).
External efforts at district-level reform: The case of the National Alliance for Restructuring Education. Journal of Educational Change. Retrieved from http://www.springerlink.com/content/h4112277g7872w07/fulltext.pdf. (abstract)
Peurach, D., Glazer, J., & Gates, K. (2004).
Supporting instructional improvement: Teacher learning in comprehensive school reform. Washington, D.C.: National Clearinghouse on Comprehensive School Reform. (abstract)
Rowan, B. (2001).
The ecology of school improvement. Journal of Educational Change, 3, 283-314. (abstract) [Abstract posted with permission from Springer Science and Business Media © 2002. All rights reserved. The article above is linked to the Kluwer Online website.]
Rowan, B., Camburn, E., & Barnes, C. (2004).
Benefiting from comprehensive school reform: A review of research on CSR implementation. In C. Cross (Ed.), Putting the pieces together: Lessons from comprehensive school reform research (pp. 1-52). Washington, D.C.: National Clearinghouse for Comprehensive School Reform. (abstract)
Rowan, B., & Miller, R. J. (2007).
Organizational strategies for promoting instructional change: Implementation dynamics in schools working with comprehensive school reform providers. American Educational Research Journal, 44, 252-297. (abstract) Available at: http://aer.sagepub.com/cgi/content/abstract/44/2/252
Rowan, B. (2008).
Does the school improvement industry help or prevent deep and sound change? Journal of Educational Change, 9, 197-202. Retrieved from http://www.springerlink.com/content/e1k7164k21593u77/fulltext.pdf
Rowan, B., Correnti, R., Miller, R. J., & Camburn, E. (2009).
School improvement by design: Lessons from a study of comprehensive school reform designs. In B. Schneider & G. Sykes (Eds.), Handbook of Education Policy Research. London: Taylor & Francis.
Rowan, B., Camburn, E., Correnti, R., & Miller, R. (in press).
How comprehensive school reform works: Insights from a study of instructional improvement. In R. Derouet (Ed.), Knowledge and Equality – How Consistent are Education and Training Policies? French-American Cross-cultural Reflections. Lyon, France: INRP.

Measuring Student Learning and Achievement

Atkins-Burnett, S., Rowan, B., & Correnti, R. (2001).
Administering standardized achievement tests to young children: How mode of administration affects the reliability and validity of standardized measures of student achievement in kindergarten and first grade. Consortium for Policy Research in Education, Study of Instructional Improvement, Research Note S-1. Ann Arbor, MI: University of Michigan. (abstract)
Rowan, B., Correnti, R., & Miller, R. J. (2001).
What large-scale, survey research tells us about the effects of teachers and teaching on student achievement. Consortium for Policy Research in Education, Study of Instructional Improvement, Research Note S-5. Ann Arbor, MI: University of Michigan. (A version of this paper also appears in Teachers College Record, 104(8), 1525-1567). (abstract)
Hill, H. C., Rowan, B., & Ball, D. L. (2005).
Effects of teachers' mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42(2), 371-406. (abstract)
Raudenbush, S. W., Hong, G., & Rowan, B. (in press).
Studying the causal effects of instruction with application to primary-school mathematics. In J. M. Ross, G. W. Bohrnstedt, and F.C. Hemphill (Eds.), Instructional and Performance Consequences of High Poverty Schooling. Washington D.C.: National Center for Education Statistics.

Examining Teachers' Content Knowledge for Teaching

Hill, H., Rowan, B., & Ball, D. (2005).
Effects of teachers' mathematical knowledge for teaching on student achievement. American Educational Research Journal. 42(2), 371-406. (abstract)
Hill, H., Schilling, S., & Ball, D. (2004).
Developing measures of teachers' mathematics knowledge for teaching. Elementary School Journal, 105, 11-30. (abstract) [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for non-scholarly purpose, please contact the University of Chicago Press directly.]
Phelps, G., & Schilling, S. (2004).
Developing measures of content knowledge for teaching reading. Elementary School Journal, 105, 31-48. (abstract) [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for non-scholarly purpose, please contact the University of Chicago Press directly.]
Rowan, B., Schilling, S., Ball, D., & Miller, R. (2001).
Measuring teachers' pedagogical content knowledge in surveys: An exploratory study. Consortium for Policy Research in Education, Study of Instructional Improvement, Research Note S-2. Ann Arbor: University of Michigan. (abstract)

Appendix A: Detailed Results for the Domain of Mathematics

Appendix B: Detailed Results for the Domain of Reading/Language Arts

Appendix C: Item Writing Recommendations

Measuring and Improving Instructional Practice

Ball, D., Camburn, E., Correnti, R., Phelps, G., & Wallace, R. (1999).
New tools for research on instruction: A web-based teacher log. Working paper, Center for Teaching Policy. Seattle: University of Washington. (abstract)
Ball, D., & Rowan, B. (2004).
Introduction: Measuring instruction. Elementary School Journal, 105(1), 3-10.
Camburn, E., & Barnes, C. (2004).
Assessing the validity of a language arts instruction log through triangulation. Elementary School Journal, 105, 49-74. (abstract) [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for non-scholarly purpose, please contact the University of Chicago Press directly.]
Correnti, R., & Rowan, B. (2007).
Opening up the black box: Literacy instruction in schools participating in three comprehensive school reform programs. American Educational Research Journal, 44, 298-338. (abstract) Available at: http://aer.sagepub.com/cgi/content/abstract/44/2/298

Hill, H. (2005).
Content across communities: Validating measures of elementary mathematics instruction. Educational Policy, 19(3), 447-475. (abstract)
Rowan, B., Camburn, E., & Correnti, R. (2004).
Using teacher logs to measure the enacted curriculum in large-scale surveys: A study of literacy teaching in 3rd grade classrooms. Elementary School Journal, 105, 75-102. (abstract) [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for non-scholarly purpose, please contact the University of Chicago Press directly.]
Rowan, B., Jacob, R. & Correnti, R. (2009).
Using instructional logs to identify quality in educational settings. New Directions for Youth Development: Research, Practice, Theory, 121 (Summer).
Rowan, B. & Correnti, R. (2009).
Studying reading instruction with teacher logs: Lessons from A Study of Instructional Improvement. Educational Researcher, 38 (2), 120-131. (abstract)
Rowan, B. & Correnti, R. (2009).
Interventions to improve instruction: How implementation strategies affect instructional change. In W. K. Hoy and M. DiPaola (Eds.), Studies in school improvement: A volume in research and theory in educational administration (pp. 45-76). Charlotte, NC: Information Age Publishing.
Rowan, B., Harrison, D., & Hayes, A. (2004).
Using instructional logs to study elementary school mathematics: A close look at curriculum and teaching in the early grades. Elementary School Journal, 105, 103-127. Technical version of this paper. (abstract) [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for non-scholarly purpose, please contact the University of Chicago Press directly.]
Rowan, B., Camburn, E. & Correnti, R. (2008).
Teacher logs as a tool for studying educational process. In R. Belli, F. Stafford, & D. Alwin (Eds.), Using Calendar and Diary Methods in Life Events Research. Newbury Park, CA: Sage.

Abstracts

CSR: Organization for Change and School Improvement

Barnes, C., Camburn, E., Kim, J., & Rowan, B. (2004, April). School leadership and instructional improvement in CSR schools. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA. This paper attempts to expand our understanding of distributed leadership within the context of comprehensive school reform. Using qualitative and quantitative data from a large-scale longitudinal study of three widely disseminated CSR programs, we investigate the role of leadership practice in supporting instructional improvement by examining interactions between school leaders and teachers. We also examine a number of outcomes thought to be associated with these interactions, such as teachers' understanding of program goals, their motivation for improvement, and their effort at improving instruction. Our case data and survey data show that two key instructional leadership functions, developing teacher capacity and monitoring instruction, extend across multiple design elements and artifacts, over time, and over interactions with strategically selected leaders. (back to reference)

Camburn, E., Rowan, B., & Taylor, J. (2003). Distributed leadership in schools: The case of elementary schools adopting comprehensive school reform models. Educational Evaluation and Policy Analysis, 25(4), 347-373. [Article posted with permission from the American Educational Research Association © 2003. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for a non-scholarly purpose, please contact AERA directly.] This is a study of distributed leadership in the context of elementary schools' adoption of comprehensive school reforms (CSR). Most CSRs are designed to configure school leadership by defining formal roles, and we hypothesized that such programs activate those roles by setting expectations for and socializing (e.g., through professional development) role incumbents. Configuration and activation were further hypothesized to influence the performance of leadership functions in schools. Using data from a study of three widely adopted CSR models, support was found for the configuration and activation hypotheses. Leadership configuration in CSR schools differed from that of non-CSR schools in part because of the addition of model-specific roles. Model participation was also related to the performance of leadership functions, as principals in CSR schools and CSR-related role incumbents were found to provide significant amounts of instructional leadership. Further support for activation hypotheses is suggested by positive relationships between leaders' professional development experiences and their performance of instructional leadership. (back to reference)

Cohen, D. K., & Ball, D. L. (2007). Educational innovation and the problem of scale. The U.S. has had five decades of increasing pressure for school reform, beginning before the Soviet Sputniks and continuing today. Researchers appear to believe that none of these reforms were implemented at scale – i.e., widely and well. Despite this view, most authors write as though it was reasonable to expect innovations to be adopted widely and well; when studies report innovation failure, they usually do so in a disappointed tone. Yet, like love affairs, many innovations consist more of ideas and hopes than of the carefully designed details of daily operations that often are required to make appreciable change. Our scrutiny of the literature revealed no discussions of what might reasonably be expected from innovation in education, let alone what might be expected, given the nature of schooling in the U.S. We found only three studies that sought to discern broad patterns of innovation, adoption, or implementation. The contemporary pressure for major national improvement of student learning has created an appetite for better knowledge about innovation. We sketch what is known and believed about innovation, offer some conjectures that merit investigation, and consider strategies for successful innovation. (back to reference)

Cohen, D. K., & Ball, D. L. (2000). Instruction and innovation: Reconsidering the story. Working paper, Consortium for Policy Research in Education, Study of Instructional Improvement. Ann Arbor: University of Michigan. This paper revisits the widely reported failure of instructional innovations. The key symptoms, reported in many studies, include the adoption of only selected elements of innovations, superficial enactment, extensive variability in the fragments teachers adopt, and rapid turnover of innovations in schools and classrooms. Our reconsideration arises from the view, which has grown during our studies of teaching and efforts to change it, that most efforts to explain the enactment of instructional innovations have not satisfactorily considered the central issues concerning teaching and learning. (back to reference)

Cohen, D. K., & Ball, D. L. (1999). Instruction, capacity, and improvement. (CPRE Research Report No. RR-043). Philadelphia, PA: University of Pennsylvania, Consortium for Policy Research in Education. In this paper we develop an interactive model of instruction to analyze both teaching and efforts to improve teaching. We elaborate this split-level frame as we go, drawing out implications for understanding instruction, instructional improvement, and research on the two. We conclude by summarizing the ideas and distinguishing them from other approaches to understanding and studying school improvement. (back to reference)

Cohen, D. K., Raudenbush, S. W., & Ball, D. L. (2003). Resources, instruction, and research. Educational Evaluation and Policy Analysis, 25(2), 1-24. [Article posted with permission from the American Educational Research Association © 2003. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for a non-scholarly purpose, please contact AERA directly.] Education policymakers have long believed that conventional resources, i.e., books, bricks, class size, and teacher qualifications, directly affect student learning and achievement. This paper builds on more recent research and argues that learning is affected by how resources are used in instruction, not by their mere presence or absence. If use is central to resource effects, research on the effects of resources should be broadened to include the chief influences on use, including teachers' and students' knowledge, skill, and will, and features of teachers' and learners' environments, including school leadership, academic norms, and institutional structures. We discuss how resource use is influenced by the management of instructional environments. Having framed the issues in a way that places use by teachers and learners at the center of inquiry, we then discuss research designs that would be appropriate to identify resource effects. (back to reference)

Glazer, J. (2008). External efforts at district-level reform: The case of the National Alliance for Restructuring Education. Non-government interveners have been at the forefront of school-level change for over a decade, yet little is known about their capacity to foster change at the district level. This paper develops a theoretical frame for analyzing district-level intervention and applies it to the National Alliance for Restructuring Education. The frame highlights three factors that are salient for reformers' efforts to enhance district capability: (1) a design for change that elaborates goals, processes, and the overall change process; (2) the social and political environments that shape interveners' capacity to sustain effective district-level intervention; and (3) the capability of the intervener's own organization. Applying this frame to the National Alliance for Restructuring Education, a district- and state-level intervention that was active in the 1990s, reveals a fundamental dilemma that district-level interveners must manage in order to sustain their efforts. Designs that provide more intensive guidance are more potent instruments for improving districts, but designs that are less detailed place far less pressure on the intervener organization by requiring fewer human and fiscal resources. A consequence of this is that the logic of improving districts and the logic of organizational survival can be in direct conflict with one another. (back to reference)

Peurach, D., Glazer, J., & Gates, K. (2004). Supporting instructional improvement: Teacher learning in comprehensive school reform. Washington, D.C.: National Clearinghouse on Comprehensive School Reform. The objective of this article is to help school leaders and teachers consider how comprehensive school reform programs can support instructional improvement by highlighting promising strategies for teacher learning. One strategy used variably across CSR programs involves providing instructional materials that are educative for both students and teachers. A second strategy is to provide teachers with multiple models of instructional practice, including vignettes describing instructional interactions between teachers and students, and exposure to model classrooms where teachers can observe instruction. Third, programs provide collegial learning opportunities in which teachers can work together, either one-on-one or in groups. Fourth, programs have designs for improving instructional leadership, both by creating new leadership roles and by reorienting conventional administrative roles to instructional improvement. Additional program strategies include direct technical assistance to teachers and school leaders over a period of years, or providing assistance through the use of local and national networks. (back to reference)

Rowan, B. (2001). The ecology of school improvement. Journal of Educational Change, 3, 283-314. [Abstract posted with permission from Springer Science and Business Media © 2002. All rights reserved. The article above is linked to the Kluwer Online website.] This paper explains how organizations other than schools and governing agencies affect the scope and pace of change in American education. In particular, the paper discusses a set of organizations operating in what can be called the school improvement "industry" in the United States, that is, a group of organizations providing schools and governing agencies with information, training, materials, and programmatic resources relevant to problems of instructional improvement. The paper shows how the structure and functioning of these organizations explain patterns of change in American education, including why schools in the United States experience wave after wave of innovation and reform while at the same time maintaining a stable core of instructional practices. (back to reference)

Rowan, B., Camburn, E., & Barnes, C. (2004). Benefiting from comprehensive school reform: A review of research on CSR implementation. In C. Cross (Ed.), Putting the pieces together: Lessons from comprehensive school reform research (pp. 1-52). Washington, D.C.: National Clearinghouse for Comprehensive School Reform. This paper examines what happens when schools engage in a process of comprehensive school reform (CSR). Although this process often begins with a decision by schools to adopt a research-based "model" or "design" for school improvement, decades of research on planned educational change suggest that simply adopting a model or design, in itself, will not guarantee successful utilization of that model inside schools. Instead, successful school improvement results from a confluence of circumstances that must and can be orchestrated by external change agents (like CSR model providers), district and school leaders, and teachers and students working in cooperation with one another to implement a process of whole-school reform. The purpose of this paper is to give the reader a sense of the strategies used by schools that have successfully engaged in this process. (back to reference)

Rowan, B., & Miller, R. J. (2007). Organizational strategies for promoting instructional change: Implementation dynamics in schools working with comprehensive school reform providers. American Educational Research Journal, 44, 252-297. This article develops a conceptual framework for studying how three comprehensive school reform (CSR) programs organized schools for instructional change and how the distinctive strategies they pursued affected implementation outcomes. The conceptual model views the Accelerated Schools Project as using a system of cultural control to produce instructional change, the America's Choice program as using a model of professional control, and the Success for All program as using a model of procedural control. Predictable differences in patterns of organizing for instructional improvement emerged across schools working with these three programs, and these patterns were found to be systematically related to patterns of program implementation. In particular, the two CSR programs that were organized to produce instructional standardization produced higher levels of instructional change in the schools where they worked. The results of the study suggest organizational strategies program developers can use to obtain implementation fidelity in instructional change initiatives. (back to reference)

Measuring Student Learning and Achievement

Atkins-Burnett, S., Rowan, B., & Correnti, R. (2001). Administering standardized achievement tests to young children: How mode of administration affects the reliability and validity of standardized measures of student achievement in kindergarten and first grade. Consortium for Policy Research in Education, Study of Instructional Improvement, Research Note S-1. Ann Arbor, MI: University of Michigan. This paper reports on an experiment conducted to examine the consequences of assessing kindergarten and first-grade students' academic achievement in group versus individualized assessment settings. In the experiment, 442 students blocked by classroom and grade level were randomly assigned to one of two assessment modes: a small group setting with 8 other students from their classroom versus an individualized setting. Students in both settings were administered the grade-appropriate form of the CTB McGraw-Hill Terra Nova Tests of Achievement, Form A, by trained assessors from the Institute for Social Research at the University of Michigan. Assessment results were then scored by the publisher. The results of the experiment showed that in both kindergarten and first grade, group assessment settings were more likely than individualized settings to be characterized by behavior that assessors coded as disruptive or distracting for students, and that students at both grade levels who were assessed in the group setting omitted more test items and made more multiple marks on items than did students assessed in the individual setting. The study also found that kindergarten students assessed in the group setting had lower Reading, Language, and Mathematics scale scores (as estimated by the publishers' three parameter IRT model) and that these scale scores had higher standard errors of measurement for kindergarten students assessed in the group setting.
However, there were no differences in measured achievement or standard errors of measurement across assessment modes among first grade students. We argue that the differences in assessment environments and item-response patterns of students in group settings call into question the validity of assessment results for young children assessed in group settings, even when such results do not produce observable differences in the measured outcomes of these children compared to students assessed individually. (back to reference)

Rowan, B., Correnti, R., & Miller, R. J. (2001). What large-scale, survey research tells us about the effects of teachers and teaching on student achievement. Consortium for Policy Research in Education, Study of Instructional Improvement, Research Note S-5. Ann Arbor, MI: University of Michigan. (A version of this paper also appears in Teachers College Record, 104(8), 1525-1567). This paper discusses conceptual and methodological issues that arise when educational researchers use data from large-scale, survey research to examine the effects of teachers and teaching on student achievement. Using data from the Prospects: The Congressionally Mandated Study of Educational Growth and Opportunity 1991-1994, we show that researchers' use of different statistical models has led to widely varying interpretations about the overall magnitude of teacher effects on student achievement. However, we conclude that in well-specified models of academic growth, teacher effects on elementary students' growth in reading and mathematics achievement are substantial (with d-type effect sizes ranging from .72 to .85). We also conclude that various characteristics of teachers and their teaching account for these effects, including variation among teachers in professional preparation and content knowledge, use of teaching routines, and patterns of content coverage, with effect sizes of variables measuring these characteristics of teachers and their teaching showing d-type effect sizes in the range of .10. The paper concludes with an assessment of the current state of the art in large-scale, survey research on teaching. Here, we conclude that survey researchers must simultaneously improve their measures of instruction while paying careful attention to issues of causal inference. (back to reference)

Hill, H. C., Rowan, B., & Ball, D. L. (2005). Effects of teachers' mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42(2), 371-406. Acting on the assumption that improved teacher knowledge will yield gains in student achievement, scholars and policymakers have focused increasing attention and resources on improving teachers' mathematical knowledge for teaching. Content-focused professional development, mathematically supportive curriculum materials, and redesigned pre-service preparation programs are all examples of this effort. However, few studies have empirically demonstrated that teachers' mathematical knowledge is related to student achievement, especially at the elementary level. Further, existing studies have neglected to explore key questions about how this relationship is constituted. Using data from students, teachers, and schools participating in a large study of comprehensive school reform, and using novel measures that capture both common and specialized mathematical knowledge for teaching, we explore the degree to which teachers' mathematical knowledge contributes to gains in student achievement. We find a positive effect of teacher mathematical knowledge on first and third graders' gain scores. We investigate the linearity of this relationship, discuss other findings from our models, and suggest implications for policy, professional development, and further research. (back to reference)

Examining Teachers' Content Knowledge for Teaching

Hill, H., Rowan, B., & Ball, D. (2005). Effects of teachers' mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42(2), 371-406. Acting on the assumption that improved teacher knowledge will yield gains in student achievement, scholars and policymakers have focused increasing attention and resources on improving teachers' mathematical knowledge for teaching. Content-focused professional development, mathematically supportive curriculum materials, and redesigned pre-service preparation programs are all examples of this effort. However, few studies have empirically demonstrated that teachers' mathematical knowledge is related to student achievement, especially at the elementary level. Further, existing studies have neglected to explore key questions about how this relationship is constituted. Using data from students, teachers, and schools participating in a large study of comprehensive school reform, and using novel measures that capture both common and specialized mathematical knowledge for teaching, we explore the degree to which teachers' mathematical knowledge contributes to gains in student achievement. We find a positive effect of teacher mathematical knowledge on first and third graders' gain scores. We investigate the linearity of this relationship, discuss other findings from our models, and suggest implications for policy, professional development, and further research. (back to reference)

Hill, H., Schilling, S., & Ball, D. (2004). Developing measures of teachers' mathematics knowledge for teaching. Elementary School Journal, 105, 11-30. In this paper, we discuss efforts to design and empirically test measures of teachers' content knowledge for teaching elementary mathematics. We begin by reviewing the literature on teacher knowledge, taking special note of how scholars have organized such knowledge. Next we describe survey items we wrote to represent knowledge for teaching mathematics and results from factor analysis and scaling work with these items. We found that teachers' knowledge for teaching elementary mathematics is multi-dimensional, and includes knowledge of various mathematical topics (e.g., number and operations, algebra) and domains (e.g., knowledge of content; knowledge of students). The constructs indicated by factor analysis form psychometrically acceptable scales. (back to reference)

Phelps, G., & Schilling, S. (2004). Developing measures of content knowledge for teaching reading. Elementary School Journal, 105, 31-48. [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for a non-scholarly purpose, please contact the University of Chicago Press directly.] In this article we present results from a project to develop survey measures of the content knowledge teachers need to teach elementary reading. In areas such as mathematics and science, there has been great interest in the special ways teachers need to know the subject to teach it to others – often referred to as pedagogical content knowledge. However, little is known about what teachers need to know about reading to teach it effectively. We begin the article by discussing what might constitute content knowledge for teaching in the area of reading and by describing the items we wrote. Next, factor and scaling results are presented from a pilot of 261 multiple-choice items with 1,542 elementary teachers. We found that content knowledge for teaching reading includes multiple dimensions, defined both by topic and by how teachers use knowledge in teaching practice. Items within these constructs form reliable scales. (back to reference)

Rowan, B., Schilling, S., Ball, D., & Miller, R. J. (2001). Measuring teachers' pedagogical content knowledge in surveys: An exploratory study. Consortium for Policy Research in Education, Study of Instructional Improvement, Research Note S-2. Ann Arbor: University of Michigan. This paper discusses the efforts of a group of researchers at the University of Michigan to develop survey-based measures of what Lee S. Shulman (1986; 1987) called teachers' "pedagogical content knowledge." The researchers discuss their rationale for using a survey instrument to measure teachers' pedagogical content knowledge and report on the results of a pilot study in which a bank of survey items was developed to directly measure this construct in two critical domains of the elementary school curriculum – reading/language arts and mathematics. The results demonstrate that particular facets of teachers' pedagogical content knowledge can be measured reliably with as few as 6-10 survey items. However, the researchers warn that additional methodological and conceptual issues must be addressed if sound, survey-based measures of teachers' pedagogical content knowledge are to be developed for use in large-scale, survey-based research on teaching. (back to reference)

Measuring and Improving Instructional Practice

Ball, D., Camburn, E., Correnti, R., Phelps, G., & Wallace, R. (1999). New tools for research on instruction: A web-based teacher log. Working paper, Center for Teaching Policy. Seattle: University of Washington. This paper reports on the initial development and pilot testing of a Web-based instrument designed to collect daily data on instruction. This instrument, referred to as the teacher log, was being developed for use in the Study of Instructional Improvement, a large-scale, longitudinal study focusing on school improvement in high-poverty schools. Although the instructional log we are ultimately using in this research is not a Web-based tool, and its features both resemble and differ from those of the one under discussion in this report, we think that there are elements of our early instrument design work that may be of use to others interested in the development of such tools to track instruction. (back to reference)

Camburn, E., & Barnes, C. (2004). Assessing the validity of a language arts instruction log through triangulation. Elementary School Journal, 105, 49-74. [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for a non-scholarly purpose, please contact the University of Chicago Press directly.] In this study we attempted to illuminate why measures of instruction sometimes fail to meet discrete tests of validity. We used a triangulation strategy – multiple methods, data sources, and researchers – to investigate teachers' and observers' reports on a daily language arts log. Data came from the pilot study of the log conducted in 8 urban public elementary schools. Statistical results increased our confidence in the log's ability to measure: a) instruction at grosser levels of detail, b) instructional activities that occurred more frequently, and c) word analysis instruction. Some qualitative evidence gave us greater confidence in the instrument – for example, when teachers differed from observers because they possessed background knowledge not available to observers. Other qualitative evidence illustrated dilemmas inherent in measuring instruction. Overall, we believe triangulation strategies provided a more holistic understanding of the validity of teachers' reports of instruction than past validity studies. (back to reference)

Correnti, R., & Rowan, B. (2007). Opening up the black box: Literacy instruction in schools participating in three comprehensive school reform programs. American Educational Research Journal, 44, 298-338. This study examines patterns of literacy instruction in schools adopting three of America’s most widely disseminated comprehensive school reform (CSR) programs (the Accelerated Schools Project, America’s Choice, and Success for All). Contrary to the view that educational innovations seldom affect teaching practices, the study found large differences in literacy instruction between teachers in America’s Choice schools and comparison schools and between teachers in Success for All schools and comparison schools. In contrast, no differences in literacy teaching practices were found between teachers in Accelerated Schools Project schools and comparison schools. On the basis of these findings and our knowledge of the implementation support strategies pursued by the CSR programs under study, we conclude that well-defined and well-specified instructional improvement programs that are strongly supported by on-site facilitators and local leaders who demand fidelity to program designs can produce large changes in teachers’ instructional practices. (back to reference)

Hill, H. (2005). Content across communities: Validating measures of elementary mathematics instruction. Educational Policy. In recent years, scholars have problematized terms used to describe instruction on teacher survey instruments. When scholars, observers, and teachers employed terms like "discuss" and "investigate," these authors found, they often meant to describe quite different events (Mayer, 1999; Spillane & Zeuli, 1999; Stigler, Gonzales, et al., 1999). This paper problematizes another set of terms often found on survey instruments, those describing mathematical content. To do so, it examines terms such as "geometry," "number patterns," and "ordering fractions" for rates of agreement and disagreement between teachers and observers participating in a field pilot of an elementary mathematics daily log. Using interviews, written observations, and reflections on disagreements, this paper is also able to ask why disagreements occurred. Sources of disagreement included problems with instrument design, memory/perception, and, notably, differences in the way language is used in different communities – university mathematicians, elementary teachers, and mathematics educators – to give meaning to subject matter terms. Theoretical and practical implications of these sources of disagreement are explored. (back to reference)

Rowan, B., Camburn, E., & Correnti, R. (2004). Using teacher logs to measure the enacted curriculum in large-scale surveys: A study of literacy teaching in 3rd grade classrooms. Elementary School Journal, 105, 75-102. [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for a non-scholarly purpose, please contact the University of Chicago Press directly.] In this article we examine methodological and conceptual issues that emerge when researchers measure the enacted curriculum in schools. After outlining key theoretical considerations that guide measurement of this construct and alternative strategies for collecting and analyzing data on it, we illustrate 1 approach to gathering and analyzing data on the enacted curriculum. Using log data on the reading/language arts instruction of more than 150 third-grade teachers in 53 high-poverty elementary schools participating in the Study of Instructional Improvement, we estimated several hierarchical linear models and found that the curricular content of literacy instruction: (a) varied widely from day-to-day; (b) did not vary much among students in the same classroom; but (c) did vary greatly across classrooms, largely as the result of teachers' participation in 1 of the 3 instructional improvement interventions (Accelerated Schools, America's Choice, and Success for All) under study. The implications of these findings for future research on the enacted curriculum are discussed. (back to reference)

Rowan, B., & Correnti, R. (2009). Studying reading instruction with teacher logs: Lessons from A Study of Instructional Improvement. This article describes some of the conceptual and methodological issues that arise when researchers use teacher logs to measure classroom instruction. Data and examples come from the Study of Instructional Improvement, which used teacher logs to study patterns of literacy instruction in schools implementing three comprehensive school reforms. Over the course of this study, more than 75,000 logs were collected from nearly 2,000 teachers in Grades 1 through 5. This article discusses why teacher logs were chosen as the data collection strategy, various psychometric issues associated with their use, and some of the substantive findings that emerged as part of the study. (back to reference)

Rowan, B., Harrison, D., & Hayes, A. (2004). Using instructional logs to study elementary school mathematics: A close look at curriculum and teaching in the early grades. Elementary School Journal, 105, 103-127. Technical version of this paper. [Article posted with permission from the University of Chicago Press © 2004. All rights reserved. This work may be downloaded only. It may not be copied or used for any purpose other than scholarship. If you wish to make copies or use it for a non-scholarly purpose, please contact the University of Chicago Press directly.] In this article we describe the mathematics curriculum and teaching practices in a purposive sample of high-poverty elementary schools working with 3 of the most widely disseminated comprehensive school reform programs in the United States. Data from 19,999 instructional logs completed by 509 first-, third-, and fourth-grade teachers in 53 schools showed that the mathematics taught in these schools was conventional despite a focus on instructional improvement. The typical lesson focused on number concepts and operations, had students working mostly with whole numbers (rather than other rational numbers), and involved direct teaching or review and practice of routine skills. However, there was wide variation in content coverage and teaching practice, with variability among teachers both within and across schools. The results provide an initial view of the state of mathematics education in a sample of schools engaged in comprehensive school reform and suggest some future lines of work. (back to reference)