A Stinging Critique of the Measures of Effective Teaching (MET) Project

(Originally titled “The MET Project: The Wrong $45 Million Question”)

 

In this Educational Leadership article, Rachael Gabriel (University of Connecticut, Storrs) and Richard Allington (University of Tennessee, Knoxville) criticize the Measures of Effective Teaching (MET) project for reducing its initial question – How can we identify and develop effective teaching? – to a much narrower one – What other measures match up well with value-added data? The MET team assumed that value-added scores were the gold standard and judged other possible ways of assessing and improving teachers against it.

“Although we don’t question the utility of using evidence of student learning to inform teacher development,” say Gabriel and Allington, “we suggest that a better question would not assume that value-added scores are the only existing knowledge about effectiveness in teaching. Rather, a good question would build on existing research and investigate how to increase the amount and intensity of effective instruction.” In pursuit of that goal, they pose five questions:

Do evaluation tools inspire responsive teaching or defensive conformity? That is, do teacher-evaluation rubrics and checklists assume there is one right way to teach? For example, KIPP (Knowledge Is Power Program) schools require students to SLANT (Sit up, Listen, Ask questions, Nod, and Track the speaker with your eyes). “At the end of the day,” say Gabriel and Allington, “we don’t care whether teachers ask students to SLANT or stand on their heads… The educational value behind such indicators is rooted in the idea that there’s a physical aspect to learning and that student engagement is important to learning. However, students will display these behaviors differently across different settings.” 

Do evaluation tools reflect our goals for public education? Gabriel and Allington say we need to “lift our eyes from lists of indicators and see whether classroom practice actually reflects the education we want for our children… We would argue that unintended effects are more frequent when teachers perform specified behaviors for the purpose of meeting evaluation requirements rather than as expressions of their professional judgment, inquiry, and reflection.” Similarly, they say, hospitals have run into trouble when they put too much emphasis on outcomes (patient success rates) rather than values (excellent treatment for all). The MET team has argued that we should accept imperfections in teacher-evaluation tools because they’ll be compensated for by strengths in other tools. “Yet when we use these flawed measures to evaluate teachers,” say Gabriel and Allington, “they become expressions of what matters in teaching.”

Do evaluation tools encourage teachers to teach literate thought? Simplistic observation checklists can lead administrators to mistake students quietly filling out low-level worksheets for engaged learning, or to criticize a teacher for allowing a student to call out an answer when that student is enthusiastically engaged. Gabriel and Allington believe we need to frame our goals broadly – getting students to be literate thinkers – and structure the evaluation process so everyone is focused on that goal and looking for evidence in broad and meaningful terms – for example, Are students reading independently? Are they talking with each other about what they are learning? Are they taking part in class discussions?

Do evaluation tools spark meaningful conversations with teachers? Gabriel and Allington suggest that administrators set aside detailed checklists and focus on questions like these: Are students in this classroom engaged? How do you know? If some are not, why not, and how can I help? “If teachers are unaware of what’s happening in their classrooms and don’t know how to reach more students, they need coaching and conversation,” say the authors. These kinds of conversations are far more substantive and helpful than looking at value-added testing data, they argue: “There is a well-documented set of concerns about value-added measurement in terms of error rates, reliability, model differences, and even exclusionary practices.” 

Do evaluation tools promote valuable educational experiences? Referring to the $45 million price tag (so far) of the MET project, Gabriel and Allington conclude, “The questions that deserve million-dollar price tags should be those that we pose as educators every day: Are students experiencing the education we hope for them? How do we know? If some are not, how can we help?” 

“The MET Project: The Wrong $45 Million Question” by Rachael Gabriel and Richard Allington in Educational Leadership, November 2012 (Vol. 70, #3, pp. 44-49), www.ascd.org; the authors can be reached at rachael.gabriel@uconn.edu and rallingt@utk.edu

 

From the Marshall Memo #460
