Faced with the highly unpopular law on teacher evaluations rushed through the Legislature by Governor Cuomo with minimal consideration or debate, seven members of the 17-member New York State Board of Regents issued a vigorous dissent. The law requires that 50% of teacher evaluations be based on test scores, a number that is not supported by research or experience. Unlike the Governor and the Legislature, these seven members of the Regents have demonstrated respect for research and concern for the consequences of this hastily passed law on teachers, children, principals, schools, and communities. They are courageous, they are wise, and they are visionaries. They have shown the leadership that our society so desperately needs. All New Yorkers are in their debt.
I place these wise leaders on the blog honor roll.
The dissident Regents issued the following statement:
Position Paper: Amendments to Current APPR Proposed Regulations
By the Signatories Below, June 2, 2015
We, the undersigned, have been empowered by the Constitution of the State of New York and appointed by the New York State Legislature to serve as the policy makers and guardians of educational goals for the residents of New York State. As Regents, we are obligated to determine the best contemporary approaches to meeting the educational needs of the state’s three million P-12 students, as well as all students enrolled in our postsecondary schools and the entire community of participants who use and value our cultural institutions.
We hold ourselves accountable to the public for the trust they have placed in our ability to represent and educate them about the outcomes of our actions, which requires that we engage in ongoing evaluations of our efforts. The results of our efforts must be transparent and invite public comment. We recognize that we must strengthen the accountability systems intended to ensure our students benefit from the most effective teaching practices identified in research. After extensive deliberation that included a review of research and information gained from listening tours, we have determined that the current proposed amendments to the APPR system are based on an incomplete and inadequate understanding of how to address the task of continuously improving our educational system.
Therefore, we have determined that the following amendments are essential, and thus required, in the proposed emergency regulations to remedy the current malfunctioning APPR system.
What we seek is a well-thought-out, comprehensive evaluation plan that sets the framework for establishing a sound professional learning community for educators. To that end, we offer these carefully considered amendments to the emergency regulations.
I. Delay implementation of district APPR plans based on April 1, 2015 legislative action until September 1, 2016.
A system that has integrity, fidelity and reliability cannot be developed absent time to review research on best practices. We must have in place a process for evaluating the evaluation system. There is insufficient evidence to support using test measures that were never meant to be used to evaluate teacher performance.
We need a large-scale study that collects rigorous evidence of fairness and reliability, and the results need to be published annually. The current system should not simply be repeated with a greater emphasis on a single test score. We do not understand, and do not support, the elimination of the instructional evidence that defines the teaching, learning, and achievement process as an element of the observation process.
A. Revise the submission date. Allow all districts to submit by November 15, 2015 a letter of intent regarding how they will utilize the time to review/revise their current APPR Plan.
B. Base 80% of teacher evaluation on student performance, leaving the following options for local school districts to select from: keeping the current local measures; generating new assessments with performance-driven student activities (performance assessments, portfolios, scientific experiments, research projects); or utilizing options like the NYC Measures of Student Learning and corresponding student growth measures.
C. Base the teacher observation category on NYSUT’s and the UFT’s scoring ranges, using their rounding-up process rather than the percentage process.
III. Base no more than 10% of the teacher observation score on the work of external/peer evaluators, an option to be decided at the local district level, where the decisions as to what training is needed will also be made.
IV. Develop weighting algorithms that accommodate the developmental stages of English Language Learners (ELL) and students with disabilities (SWD). Testing of ELL students who have had less than three years of English language instruction should be prohibited.
V. Establish a work group that includes respected experts and practitioners who are to be charged with constructing an accountability system that reflects research and identifies the most effective practices. In addition, the committee will be charged with identifying rubrics and a guide for assessing our progress annually against expected outcomes.
Our recommendations should allow the flexibility for school systems to submit locally developed accountability plans that offer evidence of rigor, validity, and a theory of action that defines the system.
VI. Establish a work group to analyze the elements of the Common Core Learning Standards and Assessments to determine levels of validity, reliability, rigor and appropriateness of the developmental aspiration levels embedded in the assessment items.
No one argues against the notion of a rigorous, fair accountability system. We disagree with the implied theory of action that frames its tenets, such as firing educators instead of promoting a professional learning community that attracts and retains talented educators committed to ensuring that our educational goals include preparing students to be contributing members of society who will sustain and improve the standards that represent a democratic society.
We find it important to note that researchers, who often represent opposing views about the characteristics that define effective teaching, do agree on the dangers of using the VAM student growth model to measure teacher effectiveness. They agree that effectiveness can depend on a number of variables that are not constant from school year to school year. Raj Chetty, a professor at Harvard University often cited as an expert on the interpretation of VAM, along with co-researchers Friedman and Rockoff, offers the following two cautions: “First, using VAM for high-stakes evaluation could lead to unproductive responses such as teaching to the test or cheating; to date, there is insufficient evidence to assess the importance of this concern. Second, other measures of teacher performance, such as principal evaluations, student ratings, or classroom observations, may ultimately prove to be better predictors of teachers’ long-term impacts on students than VAMs. While we have learned much about VAM through statistical research, further work is needed to understand how VAM estimates should (or should not) be combined with other metrics to identify and retain effective teachers.”i Linda Darling-Hammond agrees, cautioning in a March 2012 Phi Delta Kappan article that “none of the assumptions for the use of VAM to measure teacher effectiveness are well supported by evidence.”ii
We recommend that while the system is under review we minimize the disruption to local school districts for the 2015/16 school year and allow for a continuation of approved plans in light of the phasing in of the amended regulations.
Last year, Vicki Phillips, director of education at the Gates Foundation, cautioned districts to move slowly in the rollout of an accountability system based on Common Core assessments and advised a two-year moratorium before using the system for high-stakes outcomes. Her cautions were endorsed by Bill Gates.
We, the undersigned, wish to reach a collaborative solution to the many issues before us, specifically, at this moment, the revisions to APPR. However, as we struggle with the limitations of the new law, we also wish to state that we are unwilling to forsake the ethics we value; hence this list of amendments.
Kathleen Cashin
Judith Chin
Catherine Collins
*Josephine Finn
Judith Johnson
Beverly L. Ouderkirk
Betty A. Rosa
*Regent Josephine Finn said: “I support the intent of the position paper.”
i Raj Chetty, John Friedman, Jonah Rockoff, “Discussion of the American Statistical Association’s Statement (2014) on Using Value-Added Models for Educational Assessment,” May 2014, retrieved from:
http://obs.rc.fas.harvard.edu/chetty/value_added.html. The American Statistical Association (ASA) concurs with Chetty et al. (2014): “It is unknown how full implementation of an accountability system incorporating test-based indicators, such as those derived from VAMs, will affect the actions and dispositions of teachers, principals and other educators. Perceptions of transparency, fairness and credibility will be crucial in determining the degree of success of the system as a whole in achieving its goals of improving the quality of teaching. Given the unpredictability of such complex interacting forces, it is difficult to anticipate how the education system as a whole will be affected and how the educator labor market will respond. We know from experience with other quality improvement undertakings that changes in evaluation strategy have unintended consequences. A decision to use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes in the school environment. For example, more classroom time might be spent on test preparation and on specific content from the test at the exclusion of content that may lead to better long-term learning gains or motivation for students. Certain schools may be hard to staff if there is a perception that it is harder for teachers to achieve good VAM scores when working in them. Overreliance on VAM scores may foster a competitive environment, discouraging collaboration and efforts to improve the educational system as a whole.” David Morganstein & Ron Wasserstein, “ASA Statement on Using Value-Added Models for Educational Assessment,” published with license by the American Statistical Association, April 8, 2014, published online November 7, 2014: http://amstat.tandfonline.com/doi/abs/10.1080/2330443X.2014.956906. Bacher-Hicks, Kane, and Staiger (2014) likewise admit, “we know very little about how the validity of the value-added estimates may change when they are put to high stakes use. All of the available studies have relied primarily on data drawn from periods when there were no stakes attached to the teacher value-added measures.” Andrew Bacher-Hicks, Thomas J. Kane, Douglas O. Staiger, “Validating Teacher Effect Estimates Using Changes in Teacher Assignments in Los Angeles,” NBER Working Paper No. 20657, November 2014, 24-25: http://www.nber.org/papers/w20657.
ii Linda Darling-Hammond, “Can Value Added Add Value to Teacher Evaluation?” Educational Researcher 44 (March 2015): 132-37: http://edr.sagepub.com/content/44/2/132.full.pdf+html?ijkey=jEZWtoE....