Ask the Students

Brown Center Chalkboard - Brookings Institution

Teacher Jenna Rosenberg speaks to her first grade class at Walsh Elementary School in Chicago, Illinois (REUTERS/Jim Young).

Student surveys are ubiquitous in higher education as a means of evaluating teaching. (In fact, they are often the only source of feedback on classroom instruction for college professors.) But, until recently, they were quite rare in K-12 education. As state and district leaders redesign their teacher evaluation systems, they should consider adding student surveys to the set of measures they include. As we learned in the Gates Foundation's Measures of Effective Teaching (MET) project, student surveys have a number of advantages:

  1. Relationship to student achievement gains: We tested the predictive power of student surveys by comparing a teacher's score on the Tripod Survey (developed by Ron Ferguson at the Harvard Kennedy School of Government) to that teacher's effectiveness in raising test scores with a different group of students or in a different academic year. After adjusting for measurement error, the correlation was between 0.3 and 0.4 in mathematics and between 0.1 and 0.3 in English Language Arts. In other words, teachers who scored higher on the student surveys went on to produce larger achievement gains.
  2. Reliability: The student surveys were the most reliable of the measures we tested (that is, the least volatile from year to year), especially in middle school. Their reliability derives from the power of averaging. Even if an adult is a more discerning evaluator of a teacher's practice than the typical elementary or middle school student, classroom observations typically average over only one or two observers. The typical elementary classroom, by contrast, has roughly 20 students, and the typical middle school teacher works with 75 to 100 students spread across multiple sections. In addition, rather than averaging over two or three lessons, students are present for 180 days. (The sketch following this list illustrates how quickly the reliability of an average grows with the number of raters.)
  3. Improving Practice: Although student achievement gains or “value-added” measures provide predictive power (that is, they help identify teachers likely to see similar student achievement gains with future students), they offer little diagnostic power for identifying specific aspects of a teacher’s practice which deserve attention. In contrast, student surveys, like formal classroom observations, offer the chance to identify areas where a teacher could improve. The power of student surveys and formal classroom observations to drive changes in practice could be enhanced by aligning the language of the surveys with the language of the teaching standards.
  4. Cost and Coverage: Compared with the cost of observations by trained adults, or the cost of adding new assessments in untested grades and subjects, student surveys are a low-cost way to provide additional sources of data on individual teachers. In the MET study, the youngest students we surveyed were in fourth grade and the oldest were in 10th grade. In these grades, student surveys could provide coverage in subjects such as social science, science, history, and art, where student assessments are rarely available. Future work should investigate the predictive validity and reliability of student surveys in younger grades.
  5. Emotional Salience: One of the potential strengths of student surveys is that they are denominated in a currency that teachers inherently value: the perspective of their students. A merit pay system must attach financial incentives to other measures, such as classroom observations or student achievement gains, in order to make teachers value those measures. To the extent that teachers inherently care what their students have to say, and care how their students rate them relative to their peers on statements such as "We use time well and we don't waste time" or "When I turn in homework, I get useful feedback which helps me improve," it may not be necessary to attach financial incentives to provoke the desired responses from teachers.
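To make the averaging and measurement-error arguments in points 1 and 2 concrete, here is a minimal Python sketch using the textbook Spearman-Brown and disattenuation formulas. The single-rater reliability and the reliabilities plugged into the correction are illustrative assumptions, not MET estimates.

    # A minimal sketch (not from the MET reports): how averaging many student
    # ratings stabilizes a survey score, via the Spearman-Brown formula, plus
    # the standard disattenuation correction for measurement error.
    # All numbers below are illustrative assumptions, not MET estimates.

    def spearman_brown(single_rater_reliability: float, n_raters: int) -> float:
        """Reliability of the average of n_raters parallel ratings."""
        r = single_rater_reliability
        return (n_raters * r) / (1 + (n_raters - 1) * r)

    def disattenuate(observed_corr: float, rel_x: float, rel_y: float) -> float:
        """Correct an observed correlation for measurement error in both measures."""
        return observed_corr / (rel_x * rel_y) ** 0.5

    if __name__ == "__main__":
        single_rater = 0.20  # assumed reliability of one student's rating
        for n in (1, 5, 20, 75):
            print(f"{n:>3} raters -> reliability of the class average: "
                  f"{spearman_brown(single_rater, n):.2f}")

        # The measurement-error adjustment mentioned in point 1: an observed
        # survey/value-added correlation of 0.25, with assumed reliabilities of
        # 0.8 (survey average) and 0.5 (value-added), implies a larger true
        # correlation between the underlying quantities.
        print(f"disattenuated correlation: "
              f"{disattenuate(0.25, rel_x=0.8, rel_y=0.5):.2f}")

With these assumed numbers, a single rater's score has reliability 0.20, the average over 20 students reaches roughly 0.83, and the average over 75 students roughly 0.95, which is the sense in which averaging over many students and many days makes the survey score stable.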

There are only a few places to look for independent sources of feedback on a teacher's practice. Student achievement gains or "value-added" measures are valuable when they are available, but fewer than a quarter of teachers work in tested grades and subjects. Classroom observations by principals are another source, but it is costly to add observers from outside the school. Student surveys are a natural additional source of feedback for teachers. Outside the tested grades and subjects, they may be the only source besides the teacher's principal, and as such they would be valuable for balancing or confirming the principal's judgments.

Of course, we must be mindful that attaching high stakes for teachers to information from student surveys may introduce pressures to distort those measures. After all, some college professors have been known to chase higher student evaluation scores by being easy graders. One of the best ways to reduce this tendency is to use multiple sources of information, and not just one metric, for making important decisions about teachers. Meanwhile, through the MET project, we’ve learned what types of relationships to expect between student survey measures, student achievement gains and observations. States and districts should monitor the relationships among the various measures. If students or teachers begin abusing the student surveys (or another one of the measures), an early warning sign would be the breakdown of those relationships.
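As one illustration of what that monitoring could look like, the sketch below computes the teacher-level correlations among the measures for each year and flags any pair whose correlation has dropped sharply relative to a baseline year. The column names and the 0.10 drop threshold are hypothetical choices for this example, not part of the MET design.

    # A hedged sketch of the monitoring described above: track the pairwise
    # teacher-level correlations among a district's measures (survey,
    # observation, value-added) year over year and flag large drops.
    # The column names and the 0.10 threshold are hypothetical choices.

    from itertools import combinations

    import pandas as pd

    MEASURES = ["survey", "observation", "value_added"]

    def yearly_correlations(scores: pd.DataFrame) -> pd.DataFrame:
        """Pairwise correlations among measures, by year.

        Expects one row per teacher-year with columns:
        teacher_id, year, survey, observation, value_added.
        """
        rows = []
        for year, group in scores.groupby("year"):
            for a, b in combinations(MEASURES, 2):
                rows.append({"year": year, "pair": f"{a}/{b}",
                             "corr": group[a].corr(group[b])})
        return pd.DataFrame(rows)

    def flag_breakdowns(corrs: pd.DataFrame, baseline_year: int,
                        drop_threshold: float = 0.10) -> pd.DataFrame:
        """Flag measure pairs whose correlation fell well below the baseline year."""
        baseline = corrs.loc[corrs["year"] == baseline_year].set_index("pair")["corr"]
        latest_year = corrs["year"].max()
        latest = corrs.loc[corrs["year"] == latest_year].set_index("pair")["corr"]
        drop = baseline - latest
        return pd.DataFrame({"baseline": baseline, "latest": latest,
                             "drop": drop, "breakdown": drop > drop_threshold})

A district could run something like this annually; a flagged pair would not prove gaming, but it would be the early warning sign described above and a prompt to look more closely.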

The following relevant reports can be found at www.metproject.org:

The Bill & Melinda Gates Foundation, Learning about Teaching: Research Report (Seattle, WA: The Bill & Melinda Gates Foundation, 2010)

Thomas J. Kane and Douglas O. Staiger, Gathering Feedback for Teaching: Research Paper (Seattle, WA: The Bill & Melinda Gates Foundation, 2012)

Thomas J. Kane, Daniel F. McCaffrey, Trey Miller, and Douglas O. Staiger, Have We Identified Effective Teachers? Validating Measures of Effective Teaching Using Random Assignment (Seattle, WA: The Bill & Melinda Gates Foundation, 2013)

Kata Mihaly, Daniel F. McCaffrey, Douglas O. Staiger, and J.R. Lockwood, "A Composite Estimator of Effective Teaching," RAND Working Paper, January 8, 2013.

  • Thomas Kane is professor of education and economics at the Harvard Graduate School of Education and faculty director of the Center for Education Policy Research, a university-wide research center that partners with states and districts to evaluate innovative policies.   He was deputy director in the education group at the Bill & Melinda Gates Foundation, where he directed the Measures of Effective Teaching project.


Comment by Kirk Cunningham on April 19, 2013 at 6:49pm

Bravo, Mr. Kane. I would only add that Sudbury Valley Schools have understood this correlation for nearly 50 years. At the end of each school year, students and staff (one person, one vote, a real democracy) vote on which staff to retain. Guess what? There haven't been any Lord of the Flies episodes, just kids looking out for their best interests.
