By Kathleen Porter-Magee / August 8, 2013
Thomas B. Fordham Institute
On Wednesday, New York officials released results from the state’s first administration of the new, more difficult, Common Core–aligned tests. As officials warned—and as everyone knew—the results were low; shockingly low in some instances. Last year, 47 percent of New York students scored at or above proficiency on the state’s old English language arts exam and 60 percent were proficient or better in math. This year, 26 percent were at or above proficiency in ELA and 30 percent in math.
Critics were predictably outraged, accusing state and city leaders of having “unrealistically high” expectations for their students. While New York leaders have certainly made some missteps over the past several years—with absurd commissioned passages, scoring errors, and questionable links between Pearson curricula and statewide assessments—they have been unwavering in their support for more rigorous standards and in their desire to align state tests and proficiency cut scores with those standards.
Yet reform critics and parents have asked why the state would raise the bar so high that so many students score below proficient.
The purpose is simple: to ground the work of our schools in an honest understanding of how our students are actually doing. For too long, we set the bar based not on academic standards but rather, it seemed, on political calculations.
A high school diploma—or high school proficiency rate—is supposed to be a meaningful indicator that the student who has earned it has mastered content that would prepare her for what lies beyond—at a minimum, for credit-bearing coursework in college or for gainful employment. We know that, in too many places, “proficiency” cut scores and high school diplomas do neither.
That evidence comes from a variety of sources. The number of high school graduates who need to take remedial courses in college, for example, is alarmingly high. According to the NCES, fully 30 percent of blacks and Hispanics who matriculate into college reported taking remedial courses to learn things that they should have learned in high school–level courses. Ditto 20 percent of white students. And that is just among those who opted into college. What about the students who self-selected out? The simple truth is that large numbers—perhaps a majority—of high school graduates have not mastered what they need to master.
The Common Core standards, then, do not attempt to arbitrarily “raise the bar” for the sake of making school more difficult (or making schools look bad). They attempt to align minimum K–12 expectations with those of the colleges and universities that will be accepting our graduates, to ensure that students actually master the content they are supposed to while they are in our public schools.
That is a distinction with a very real difference for our children.
Of course, that’s not to say that there aren’t very important debates we should be having about the accountability we tie to results from state assessments. After the recent flap in Indiana, several of us at Fordham voiced concerns and questions about state accountability systems, and I’ve certainly questioned the value of state-driven teacher-accountability reforms.
But while those are important discussions, they are separate. Today, the story is more straightforward. And to understand the implications of the test-score release, it’s important to keep focused on the facts. To that end, let’s remember three things:
1. This year’s New York test scores are a baseline that will be used to calculate future growth scores.
Because this year’s test is so different, the results from this test will be used as a baseline. We can’t—and shouldn’t—try to extrapolate achievement gains or losses based on the released data. It’s a different game now.
2. Test-score drops in New York do not mean schools or teachers have suddenly gotten worse.
The Common Core State Standards are different from—and in many cases higher than—the New York standards they replaced. So, even if students were objectively doing exactly the same this year as they did last year, scores would drop. Indeed, New York City’s Chief Academic Officer Shael Polakow-Suransky warned that “you can’t really compare results from this year’s test with results from previous New York State tests because they’re not just slightly different tests, they’re dramatically different tests.”
3. Results from this year’s New York State test will not be used to punish teachers, students, or schools.
Critically, the Department of Education assures that scores from this year’s test will not be used to unfairly penalize teachers and principals. Guidance released by the Department this week urges school and district leaders “to be thoughtful to ensure these proficiency results have no negative impact on students, schools, districts, or teachers.” While I have questions about state teacher-evaluation plans, I am glad to hear that baseline results from the first Common Core tests won’t be linked to teacher and principal evaluation.
The state also released 25 percent of the assessment items, along with information about how each item measures the aligned standard. Given the importance of state assessments to Common Core implementation and statewide accountability, we can and should scrutinize those items for their alignment to the content and rigor of the CCSS. And that information will be critical for educators seeking to use the assessments to help drive planning and instruction.
As we’ve long said, adoption of the Common Core was, in many ways, the easy part. Faithful implementation of the standards, and steadfast commitment to the goal of aligning our K–12 expectations with college and career readiness, will be far more difficult. Now the commitment of New York’s leaders, as well as of Common Core supporters around the country, is being tested. They are doing the right thing and they deserve our support.