The following post appeared May 12, 2011. Since then it has been the most-read post I have written, with nearly 28,000 views. I am updating it with a few changes in language and additional studies and comments that were not in the original post.
I like numbers. Numbers are facts: a blood pressure reading is 145/90. Numbers are objective, free of emotion. The bike odometer tells me that I traveled 17 miles. Objective and factual as numbers may be, we still inject meaning into them. The blood pressure reading, for example, crosses the threshold of high blood pressure and needs attention. And that 17-mile bike ride meant a chocolate-dipped vanilla cone at a Dairy Queen.
Which brings me to a school reform effort centered on numbers. Much has already been written on the U.S. obsession with standardized test scores. Ditto for the recent passion for value-added measures. I turn now to policymakers who gather, digest, and use a vast array of numbers to reshape teaching practices.
Yes, I am talking about data-driven instruction–a way of making teaching less subjective, more objective, less experience-based, more scientific. Ultimately, a reform that will make teaching systematic and effective. Standardized test scores, dropout figures, and percentages of non-native speakers proficient in English are collected, disaggregated by ethnicity and school grade, and analyzed. Then, with access to data warehouses, staff can obtain electronic packets of student test data to use in instructional decision-making aimed at increasing academic performance. Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades to increase their productivity.
An earlier incarnation appeared four decades ago. Responding to criticism of failing U.S. schools, policymakers established “competency tests” that students had to pass to graduate high school. These tests measured what students learned from the curriculum. Policymakers believed that when results were fed back to principals and teachers, they would realign lessons. Hence, “measurement-driven” instruction.
Of course, teachers had always assessed learning informally before state- and district-designed tests. Teachers accumulated information (oops! data) from pop quizzes, class discussions, observing students in pairs and small groups, and individual conferences. Based on these data, teachers revised lessons. Teachers leaned heavily on their experience with students and the incremental learning they had accumulated from teaching 180 days, year after year.
Such micro-decisions, at once subjective and objective, were both practice- and data-driven. Teachers’ informal assessments gathered information directly from students and led to altered lessons. Analysis of annual test results that showed patterns in student errors helped teachers figure out better sequencing of content and different ways to teach particular topics.
In the 1990s, and especially after No Child Left Behind became law in 2002, electronically gathering data, disaggregating it by groups and individuals, and then applying lessons learned from analysis of tests and classroom practices became a top priority. Why? Because public reporting of low test scores and inadequate school performance carried stigma and high-stakes consequences (e.g., state-imposed penalties) that could lead to a school’s closure, negative teacher evaluations, and students dropping out.
Now, principals and teachers are awash in data.
How do teachers use the massive amounts of data available to them on student performance? Researcher Viki Young studied how four elementary school grade-level teams used data to improve lessons. She found that supportive principals and superintendents and habits of collaboration increased the use of data to alter lessons in two of the cases but not in the other two. She did not link the work of these grade-level teams to student achievement. In another study of 36 instances of data use in two districts, Julie Marsh and her colleagues found 15 in which teachers used annual tests in basic ways, for example, to target weaknesses in professional development or to schedule double periods of language arts for English language learners. The researchers pointed out how the timeliness of data, its perceived worth by teachers, and district support limited or expanded the quality of analysis. They admitted, however, that they could not connect student achievement to the 36 instances of basic to complex data-driven decisions in these two districts.
Yet policymakers assume that micro- or macro-decisions driven by data will improve student achievement, just as major corporations accrue productivity increases and profits from using data to make decisions. Wait, it gets worse.
In 2009, the federal government published a report (IES Expert Panel) that examined 490 studies in which school staffs used data to make instructional decisions. Of these studies, the expert panel found 64 that used experimental or quasi-experimental designs, and only six–yes, six–met the Institute of Education Sciences standard for making causal claims about data-driven decisions improving student achievement. When reviewing these six studies, however, the panel found “low evidence” (rather than “moderate” or “strong” evidence) to support data-driven instruction. In short, the assumption that data-driven instructional decisions improve student test scores is, well, still an assumption, not a fact.
Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But we give meaning to these numbers. Data-driven instruction may be a worthwhile reform but as an evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.