In a recent contribution to the growth/value-added discussion on the DATAG listserv, it was stated that "We invite comment from any of our DATAG Colleagues but are especially interested in the opinions of our resident statisticians." The arc of this discussion is exactly what I had feared - and predicted in my initial posting on May 31.

It runs like this: Growth and value-added models are extremely technical. Here, try to read these extremely technical articles. If you do not understand them, defer to the judgment of the statisticians.

The adoption of a growth or value-added model is a public policy decision, informed no doubt by the recommendations of mathematicians, statisticians, and research scientists, but ultimately made by elected and appointed public officials. Any adoption must be supported by a tax-paying public.

Let me offer an example of why we must always question the opinions of experts. It was stated with authority via the DATAG listserv that "The Rand Corporation, one of the country's most esteemed think tanks, have expressed that Sanders' methodology is the 'most preferred' model for value-added calculations."

The Rand report, in fact, concluded that "Full multivariate analysis of the data is flexible and uses correlation among multiple years of data. This approach is likely to be preferable but is computationally demanding" (p. xvi). The Rand report endorses a methodology (i.e., "full multivariate analysis" with "correlation among multiple years of data"), not any particular brand (e.g., Sanders or TVAAS).

On page two of the full report (not distributed via the listserv), the Rand authors speak of the Sanders model as "The most prominent implementation of this approach . . ." but acknowledge that other folks have attempted similar solutions (e.g., Webster and colleagues). On page 63 of the report, they state "The primary disadvantage of the multivariate models is extreme computational burden . . . While progress is being made to overcome these computational challenges (Rasbash and Goldstein, 1994; DebRoy and Bates, 2003), widely available and flexible solutions are still lacking."
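(For those curious about the shape of what is being argued over, here is a minimal sketch in Python using simulated data. It is emphatically not the Sanders/TVAAS model or the Rand authors' model; it fits a single random "teacher effect" with one prior-year score, while the full multivariate models jointly estimate covariances across all subjects, grades, and years - precisely where the computational burden arises.)

```python
# A deliberately tiny, simulated illustration of a "value-added" mixed model:
# one prior-year score and one random effect per teacher. The real multivariate
# models jointly estimate covariances across subjects, grades, and years,
# which is exactly where the "extreme computational burden" lives.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, students_per_teacher = 40, 25
true_effect = rng.normal(0, 3, n_teachers)        # hypothetical "value added"

rows = []
for t in range(n_teachers):
    for _ in range(students_per_teacher):
        prior = rng.normal(650, 30)               # simulated prior-year score
        score = 50 + 0.9 * prior + true_effect[t] + rng.normal(0, 10)
        rows.append({"teacher": t, "prior": prior, "score": score})
df = pd.DataFrame(rows)

# Random-intercept model: current score regressed on prior score, with a
# teacher-level random effect estimated by REML.
result = smf.mixedlm("score ~ prior", df, groups=df["teacher"]).fit(reml=True)

# Empirical Bayes estimates of each teacher's effect, shrunken toward zero.
est = {g: float(re.iloc[0]) for g, re in result.random_effects.items()}
print("estimated effects (first five):", [round(est[t], 2) for t in range(5)])
print("true effects (first five):     ", true_effect[:5].round(2))
```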

Further evidence that current solutions were "still lacking" is the fact that the authors of the Rand report came out with their own model a mere year after the "endorsement" referenced in the DATAG posting. (This Rand model can be found in McCaffrey, D., Lockwood, J. R., Koretz, D., Louis, T., & Hamilton, L. (2004). Models for Value-Added Modeling of Teacher Effects. Journal of Educational and Behavioral Statistics, 29(1), 67-101.)

Finally, this DATAG statistical authority acknowledged that the Sanders model does have a proprietary element - "The only proprietary part of Sanders' work is his team's solution for seeding the estimation algorithm so that the procedures used for calculating the covariance parameters converge. Otherwise, the computer would likely choke, given the model's utilization of all student test data across subjects, time, and grades."
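To make that "seeding" language concrete: the covariance parameters in these models are estimated by iterative optimization, and every optimizer has to start somewhere. Below is a toy sketch in Python (simulated data, a deliberately simple random-intercept likelihood, and nothing resembling the proprietary Sanders procedure) showing how the starting values handed to a stock optimizer can affect whether, and how quickly, it converges.

```python
# A toy illustration of "seeding" an iterative estimator. We fit the two
# covariance parameters of a simple random-intercept model by maximum
# likelihood. The optimizer needs starting values; a sensible seed converges
# quickly, while a poor one can wander or stop short. Nothing here resembles
# the proprietary Sanders procedure; the data and model are simulated toys.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_groups, n_per = 200, 10
a = rng.normal(0, 3.0, n_groups)                  # true group (teacher) effects
y = 70 + a[:, None] + rng.normal(0, 10.0, (n_groups, n_per))

def neg_loglik(log_params):
    """-log likelihood of a balanced random-intercept model (mean profiled out)."""
    va, ve = np.exp(log_params)                   # variances, kept positive
    lam1, lam2 = ve + n_per * va, ve              # compound-symmetry eigenvalues
    resid = y - y.mean()
    gm = resid.mean(axis=1, keepdims=True)        # per-group means
    within = ((resid - gm) ** 2).sum()
    between = n_per * (gm ** 2).sum()
    logdet = n_groups * (np.log(lam1) + (n_per - 1) * np.log(lam2))
    return 0.5 * (logdet + between / lam1 + within / lam2)

for seed in ([np.log(9.0), np.log(100.0)],        # seeded near the truth
             [np.log(1e-6), np.log(1e6)]):        # a poor seed
    fit = minimize(neg_loglik, x0=seed, method="Nelder-Mead")
    print("seed:", np.exp(seed).round(6), "-> estimates:", np.exp(fit.x).round(2),
          "| converged:", fit.success, "| evaluations:", fit.nfev)
```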

Although no one seems to like the word "Secret," Sanders and his colleagues are charging a fee for the very same reduction of "computational burden" that the Rand report says is critically necessary. Avoiding this fee is, I assume, at least one reason the authors of the Rand report came up with their own solution.

There is no doubt that the Sanders model works and can help schools improve. As long as schools understand what they are purchasing, I think the Capital Region BOCES is providing a great service to its customers. But the New York State Legislature has mandated the adoption of a growth/value-added model. I am not comfortable with the mandatory adoption of any statewide model that, because of its secret elements, can never be subject to independent replication and verification. I am not alone in my concerns.

Johanna Duncan-Poitier, New York's Senior Deputy Commissioner of Education, stated in her June 23 memo to the Board of Regents -

"The interim growth model should be based on an open architecture; that is the New York State Education Department (NYSED) will publish exactly how it calculates growth decisions and the result will be a single, clear, unambiguous determination of AYP for each English language arts and mathematics accountability criterion" (p. 6).

The Council of Chief State School Officers (CCSSO) stated in its 2005 report, Policymakers' Guide to Growth Models for School Accountability: How Do Accountability Models Differ?, that

"Further, due to proprietary estimation procedures, broad applications of this model [Sanders's TVAAS model] independently by states are not possible. Hence cost is an additional factor. Further, using models that contain complex (and proprietary) computations which are inaccessible to stakeholders may make it harder to build consensus and a sense of confidence around the validity of the results" (p. 16).

These are pretty clear denunciations, at both the state and national levels, of large-scale implementations of a "secret" model.

(To be fair, it appears that New York's testing program already uses two proprietary programs - "ITEMWIN" for test item selection and "FLUX" for scoring tables. Perhaps these sneaked under the political radar!)

Finally, this discussion must be anchored in the role that science plays in crafting public policy. A mathematical model is useful only to the extent that it informs decisions. Educational decisions must be subject to empirical verification of equity and efficacy. The scientific method operates via independent and public replication, as well as potential falsification.

Sanders, Saxton, and Horn (1997) declare that "research initiatives are a priority for TVAAS [the Tennessee Value-Added Assessment System]. The enormous, longitudinally merged database . . . is a unique resource for research into educational issues" (p. 141). Indeed, the ability to conduct research on the so-called "teacher effect" - the instructional value added by an individual educator during a specified period of time - is one of the primary justifications for the adoption of a value-added model.

Sanders now sells his model through a company called SAS. The SAS website features an appealing (or appalling) pitch for Sanders's new model, SAS EVAAS (see http://www.sas.com/govedu/edu/services/effectiveness.html) -

"Schools can benefit from SAS EVAAS analyses without having to invest in new hardware, software or IT staff. Instead, states or districts send electronic data directly to SAS, where the data is cleaned and analyzed. The results are then reported via a secure Web application, a powerful but user-friendly diagnostic tool."

The secrecy at work is buried in a "Pay Us, because you can Trust Us" marketing campaign. That approach may be convenient, powerful, even helpful, but it is not scientific. It can never contribute to science, because secrets can never be publicly replicated or falsified by independent investigators.

Anyone who tells you differently is either mistaken - or selling something.

What do you think?
