The Capital Region BOCES recently sponsored a useful conference entitled "Ready, Set, Grow!: School Improvement through Value-Added Analysis." Value-Added is clearly an idea whose time has come, and the conference was co-sponsored by the New York State School Boards Association, the New York State Council of School Superintendents, and the School Administrators Association of New York State. If you looked into the eyes of the attendees - it was a crowded house - you saw a mixture of attentiveness and apprehension.

One can think of accountability systems in geometric terms. New York currently calculates Adequate Yearly Progress (AYP) by comparing points: the performance of a group of students at a point in time is compared to a state standard. You're either there (AYP), or you're not.

Chapter 57 of the Laws of 2007 requires New York to have a Growth Model in place by the 2008-09 school year. A growth model is a line with slope. The same group of students is measured at two or more points in time, and the slope will be positive, negative, or zero - signifying growth, regression, or stasis. Districts not yet at AYP may still be able to demonstrate that their students are on track to get there soon. Demonstration of growth will allow a more nuanced analysis, and may become a new form of "safe harbor" for districts whose progress is masked by the current point-based system.

This same law requires that New York implement a Value-Added Model by the 2010-11 school year, subject to all sorts of conditions and approvals. With a value-added model, we will have two lines, each with its own slope: one representing expected growth and the other observed growth. A further level of analysis will be possible: schools whose students have not made AYP (the point), nor are on a timely track to reach AYP in the future (the line), may still be able to demonstrate that observed growth has a larger positive slope than would be expected if the district/school/teacher had done nothing.
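
As a rough illustration - not the State's actual method, and with made-up numbers throughout - the geometry comes down to comparing two slopes:

```python
# Illustrative sketch only: growth is the slope of a line through a
# cohort's scores over time; value-added compares that observed slope
# to an expected one. All numbers here are invented.
import numpy as np

years = np.array([0.0, 1.0, 2.0])            # measurement occasions
scores = np.array([650.0, 662.0, 671.0])     # hypothetical mean scale scores

# Observed growth: slope of the least-squares line through the points.
observed_slope, _intercept = np.polyfit(years, scores, 1)

# Expected growth: a made-up constant here; a real model would derive it
# from prior achievement and student/school characteristics.
expected_slope = 8.0

print(f"observed slope: {observed_slope:+.1f} points/year")
print(f"expected slope: {expected_slope:+.1f} points/year")
print(f"value added:    {observed_slope - expected_slope:+.1f} points/year")
```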

Crudely put, a value-added model is a statistical method to demonstrate "better than nothing." Schools that are considered under-performing in a point-based AYP model, and not-on-track in a line-based AYP model, may still produce student growth that exceeds expectation (i.e., has value). A value-added model will also identify high-achieving schools that produce no growth beyond what would be expected had the district/school/teacher done nothing.

In a value-added model, the key issue becomes how to determine expected growth. How do we calculate the slope of the predicted line?

The solution is a tangle of statistics, involving multiple measures, differences, correlations, and covariates, but the basic idea is that future growth is predicted based on past achievement, controlling for student/school characteristics. Some value-added models collect multi-year data and have students serve as their own statistical control; others have fewer repeated measures but use demographic variables as covariates to reduce error.
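
A minimal sketch of that basic idea, with fabricated data and invented variable names (a real model would use many more measures, and far more careful statistics):

```python
# Sketch: predict this year's score from last year's score plus a
# demographic covariate, then read the residual as growth beyond (or
# below) expectation. Data and names are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
prior = rng.normal(650, 25, n)               # last year's scale score
flag = rng.integers(0, 2, n).astype(float)   # e.g., an economic-need flag

# Fabricated "true" relationship, just to make the sketch runnable.
score = 40 + 0.95 * prior - 6.0 * flag + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), prior, flag])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

expected = X @ beta                          # predicted (expected) scores
residual = score - expected                  # above/below expectation
print("coefficients (intercept, prior, flag):", beta.round(2))
print("mean residual:", residual.mean().round(3))
```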

Which brings me back to the look in the eyes of the conference attendees. The policy implications of these developments are enormous and exciting; the statistics are complex. Some vendors claim the statistics are so complex - and valuable - that they are proprietary, a secret. Thankfully, New York has declared it will not adopt a secret model!

Despite the stamp of approval proffered by legislators, professional organizations, and individual testimonials, there is still a healthy debate within the field about the conditions under which a value-added model is, or is not, valid. The concern is that those who will create, comment on, or implement the policies surrounding a value-added model may not fully understand the underlying technical issues, and will therefore be overly cautious, overly eager, or - worse - will defer key decisions to the recommendations of those who claim to understand. Explaining this to the public is another matter entirely.

We often make decisions based on complex and poorly understood calculations (e.g., stock derivatives). But the mandatory adoption of a value-added model is an example in which students, schools, and teachers will be informed - and judged - in a profoundly different manner. This may be a new age of data collection, reporting, and analysis, but transparency, curiosity, and inclusiveness are timeless virtues when we have so much at stake. The leadership at the State Education Department is in the midst of a healthy debate on growth and value-added models, with an established timeline that includes expert guidance and opportunities for public comment.

Expectation is a funny thing. We know that higher expectations produce better outcomes, so long as the expectation is not so unreasonable as to produce hopelessness and apathy. If expectation is managed properly, students, teachers, and schools tend to meet the challenge. How will value-added models maintain the balance between statistical expectations based on past behavior, and high-but-not-too-high expectations based on future goals?

What do you think about growth and value-added models? What about the implications for guiding and evaluating students, teachers, schools, and districts? Please join the discussion!


Replies to This Discussion

Ken,

This is an excellent conversation to start, and I am happy to add to the "conversation." I have been keeping a keen eye on growth models and value-added models, and also keeping in the back of my head the promises that were made to us by a State official at the DATAG conference last summer: that these new systems would be transparent, would not require any new assessments, and would not add any substantially new data elements to be reported. Based on my understanding of these systems, I do not see how it is possible to keep any of these promises. First, value-added models will require some form of higher-order statistics, probably regression analysis, correlations, and ANCOVA (or some combination of all three). While I understand these analysis methods and how they work, most of us, who are not statisticians, do not, and they are certainly not transparent. Second, while I don't think we will end up with entirely new assessments, our assessments will all need to be on the same scale, including Regents Exams. A presentation at the last DATAG conference (on the concept of the TCR, or test comparison range) might provide a good solution that does not require a vertically moderated scale, but we could still end up with tests that are longer, like the 4th and 8th grade assessments. This would complicate scoring procedures and a whole host of other issues. Lastly, the coming of value-added models will certainly require the collection and reporting of additional data elements. My understanding is that the State wants to be able to control for certain variables that we do not currently collect.
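
To give a flavor of the arithmetic behind these methods, here is a toy ANCOVA - fabricated data, invented names - that compares schools on a post-test while adjusting for a pre-test covariate:

```python
# Toy ANCOVA: three schools compared on a post-test, adjusting for a
# pre-test. Everything below is fabricated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 90
df = pd.DataFrame({
    "school": np.repeat(["A", "B", "C"], n // 3),
    "pre": rng.normal(300, 20, n),
})
# Fabricated effects: school B adds 5 points over A, school C adds 10.
df["post"] = (30 + 0.9 * df["pre"]
              + df["school"].map({"A": 0, "B": 5, "C": 10})
              + rng.normal(0, 8, n))

model = smf.ols("post ~ C(school) + pre", data=df).fit()
print(model.summary().tables[1])   # adjusted school effects
```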

So, we have some challenges ahead of us. The two biggest issues I think we face in the immediate future are in-district resources and the disconnect between SED and vendors like Infinite Campus and IEP Direct. In the age of accountability, budgets are tighter than ever. LI made out well with State Aid this year, but most of the increase went to reducing the tax burden. Most of us CIOs wear several hats, and we are left without sufficient resources to keep up with the current flood of paperwork, verifications, and data analyses. Few districts in NY are fortunate enough to have a dedicated CIO. Second, SED has historically not done a good job with vendors. Vendors need at least six months to modify their products so that we can accurately report data through Level Zero and into the Data Warehouse. This is often very frustrating.

Well, I do not want to present too much gloom and doom, but I do hope I have added to this conversation. We do have some leadership challenges on the horizon, and I look forward to meeting them "head on" and working collaboratively to resolve them.

Please continue the discussion!
Hi Tim,
From what I understand, SED can still abide by the conditions you mention and move forward with a growth/value-added model.
A confusing set of statistical computations can still be "open," so long as the algorithms are public. To the extent that a system cannot easily be explained, however, people will question its validity as soon as they are negatively affected by its conclusions.
It remains unclear whether Regents exams can be incorporated into the same value-added system that includes our current 3-8 assessments. We were reassured by the vendor at the conference, however, that changes to the 3-8 assessments would not be necessary to move forward.
These changes will certainly strain school districts' capacity to adequately staff data initiatives with enough FTEs in the CIO and support staff roles.
Thanks for your comments. I hope you and others continue to weigh in on this important issue.
Ken
Thanks, Ken. I really liked the idea of the TCR, the test comparison range. It may be a useful tool for establishing a scale that can be used across grades 3 through high school. I can't wait to see how this all plays out. I have enough trouble explaining how PIs are calculated. I can't wait to explain ANCOVA!

Take care.
I still struggle to explain the perils of grade equivalents!
I was not at the DATAG conference you mentioned. When you get a chance, maybe you could tell us a little more about the discussion as it relates to growth/value-added models?
Ken
At the Spring DATAG conference, Dave Peelle and Tony Tripolone continued the discussion from the December meeting on showing status and change in assessment scores from multiple schools and regions. They entitled their discussion "New Methods of Indicating Status and Change."
They introduced the idea of the TCR, or test comparison range, and used it to create an interval-level, comparable variable in grades 3-8 for math and ELA. The TCR is actually pretty slick! They used it to scale the 3-8 exams to one uniform scale between 1 and 99. The performance levels break out as 1-25 (level 1), 26-50 (level 2), 51-75 (level 3), and 76-99 (level 4). There was a reason they did not use 0 and 100, but I cannot recall it. It might have been that they needed two digits for each number. If you want, after the next CIO meeting I could show you my notes and we could sit down and look at it together. Take care...
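
P.S. For the statistically inclined, here is a rough sketch of how such a rescaling might work. The actual TCR computation isn't captured above, so this simply assumes a piecewise-linear map from a grade's cut scores onto the fixed 1-99 bands (the cut scores below are invented):

```python
# Rough TCR-style rescaling sketch. Assumes a piecewise-linear map from
# a grade's cut scores onto the common 1-99 bands described above
# (1-25 = level 1, 26-50 = level 2, 51-75 = level 3, 76-99 = level 4).
import numpy as np

def to_tcr(raw, cuts):
    """Map a raw scale score onto 1-99, given that grade's cut scores
    (scale minimum, level 2 cut, level 3 cut, level 4 cut, maximum)."""
    anchors = [1, 26, 51, 76, 99]    # band edges on the common scale
    return float(np.interp(raw, cuts, anchors))

# Invented cut scores for one grade's ELA exam.
grade4_cuts = [475, 600, 650, 700, 780]
for raw in (590, 650, 720):
    print(raw, "->", round(to_tcr(raw, grade4_cuts)))
```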
Ken & Tim:
I like the TCR idea. I saw Dave Peelle discuss something like this several years ago. I think the concept provides a (relatively) simple way to measure the growth of a group over time. However, one of my concerns is that if a district uses it in isolation from other districts, it may not be that useful, because there is no common language or broad set of data to examine. Maybe the RIC could generate this type of data for all the districts for whom they process scores. Just a thought...

Joe Stern
I was secretly hoping that SED would adopt this, or a similar model, for use in all districts. Time will tell... and time is running short, because it would appear that a growth model needs to be implemented by September 2008.
Hi Everyone,
I responded today to Ken's initial posting. He suggested that I join this group and be part of the conversation here. I just read through it, and it looks exciting. So, I'll drop my message to the DATAG listserv here. Ken, thanks for inviting me!

I have been surprised not to see any responses to Ken's evaluation of how growth and value-added will be incorporated into the accountability system in New York State. I have been involved in the pilot out of Capital Region BOCES from the start, and have attended the training every year with schools and staff developers from my districts. I just can't get enough of it. There is so much information from Value-Added analysis that can be used for school improvement purposes (note: I did not say accountability). With regard to Ken's analysis, there are a few places where I'd put in my two cents, and here they are:

Black Box
Most of the time, the "black box" comment refers to the value-added methodology of Bill Sanders and SAS, the North Carolina company where his model is housed. This is the model that has been used in the Value-Added pilot out of Capital Region BOCES. The comment reflects a serious misunderstanding, because the Sanders model is not a black box. Certainly, he does not give away information about his computational machinery and data; that is the proprietary methodology that makes him money. That said, the mathematical formula he uses has been widely written about - in white papers, in scholarly articles, and, for anyone willing to read it, in "Grading Teachers, Grading Schools: Is Student Achievement a Valid Evaluation Measure?" by Jason Millman. More has been written about the methodology and formula behind this Value-Added model than about the post-equating process for the 3-8 tests in New York State. New York State is developing its own Growth and Value-Added model, and for almost all of us in the state, it will be a black box. The people at NYSED who actually crunch the numbers will be the only ones who understand how their own model works. (They could understand how the SAS model works as well, if they don't already.)

"Better Than Nothing"
I cringed a bit when I read that characterization of value-added. For me, Value-Added is "Finally Something." The current system does compare a group of students against a standard, as Ken said. However, when we look at how we're improving, we ask the question "Are increasing numbers of 4th graders meeting the standard on the 4th grade assessment in ELA?" This compares the performance of last year's 4th graders with this year's 4th graders, and it is incredibly limited. On a large, global level, it might show a trend of improvement because of all the work that has been done developing standards and incorporating them into the school curriculum. Beyond that, it doesn't tell us much about how schools are improving. In my region, where schools are small and entire grades have 25-30 students, the achievement in the 4th grade is as much a function of where the students started the year as of what the school did during the year, if not more. The bottom line is that socio-economic factors continue to heavily influence achievement. We need a measure that will show us what value our school has contributed to the learning of students. Value-Added gives us this "Finally Something." With Value-Added, we can see the effect that schools have on the growth of all of our students - not just the ones who are struggling to reach the minimum standard. All of our students deserve a year's worth of growth each school year. Value-Added can give you information about whether that is happening. Achievement data cannot.

Accountability vs. School Improvement
Ken's message to us was about accountability. We have to remember how different accountability and school improvement are. We look at data in different ways with each of them. Accountability causes us to protect ourselves and worry about the conclusions that will be made about us as educators because of the data. We don't have to be that way when we look at data for school improvement purposes. We can be less suspicious, and let the data tell us what we need to hear to improve instruction for all students. Ohio has adopted one value-added measure for accountability purposes and a second method for school improvement. Perhaps we need the same thing in New York.

So, that's my two cents.
David Rutherford
Director of Instructional Support Service
ONC BOCES
I really like the difference between accountability and school improvement that David pointed out. I think this delineation is important. I have often reflected on what “accountability looks and feels like” in districts and schools. I think all too often we focus on “who should be held accountable” rather than what the data tells us about what needs to be improved. This is important. During my data presentations to teachers and administrators, I have tried to stress that the data points out student learning deficiencies and where we need to mobilize resources. Good teaching and solid leadership can then be used to address these learning deficiencies. I have all but dropped the word “accountability” from my interactions with teachers and administrators. In many instances I have found the word accountability to take on the meaning of blame, and this takes us in a very dangerous direction. In most cases I have replaced “accountability” with words and phrases like “shared responsibility” and “teamwork.” In the words of John Wooden, “Trust is everything.” Over time, blame erodes trust. If this paradigm takes over, very little school improvement will be possible.
I agree that none of this makes any sense if it is divorced from efforts to improve schools. I also agree that accountability initiatives run the risk of extending no further than coercion.

That said, shouldn't our efforts to improve schools be held up against some reasonable measure of success? We are not all intrinsically motivated to improve, which I suppose is one of the reasons why external accountability systems sprang up in the first place.

I remember my days in a middle school in Nassau County in which we regularly sat around the table at team meetings eyeballing spreadsheets, looking for patterns, and asking questions. The data guided and structured our conversations and our decisions. We did it because it was the right thing to do for kids and for our profession. Accountability would have judged us kindly, but that judgment would have been superfluous to what we were trying to do.

A major criticism of the coercion model of accountability is that it sucks the joy out of teaching and learning. Accountability models will truly have grown up when what we do works, is judged successful by external standards, and - just as important, though usually underestimated - when we enjoy our efforts to teach and learn.
