Like members of any profession, teachers in America find work-related emails inundating their inboxes at any given time. It might be curriculum companies ever eager to sell their wares. It might be distance learning institutions assuring you that earning a master's degree or doctorate in education has never been more convenient.
One such piece of inbox filler caught my eye earlier this week. Actually, of all the educational email detritus that finds its way into my inbox on a weekly basis, this one has occasional value: a compendium of education-related stories compiled by the National Education Association. One article this week in particular stood out:
Which teachers hit home runs and boost student growth, and which ones strike out?
On Tuesday, the School Board plans to settle the question by hiring consultants from the University of Wisconsin.
The $3.4 million contract is part of Hillsborough County's seven-year, $202 million partnership with the Bill & Melinda Gates Foundation.
The university's task: use student tests to calculate each teacher's annual "value-added" contribution to the district.
The idea of trying to quantify the quality of educators is nothing new, of course. It is a pillar of most stabs at education "reform", dating back years, if not decades.
And, like most efforts to quantify teacher value, there are some life-changing career implications attached:
Beginning in 2011, the district hopes to use the value-added data — along with principal and peer evaluations — to help decide which teachers deserve tenure, promotions or dismissal.
By 2013, such information also will determine teacher pay.
Little in the public conversation about education draws a more dichotomous (to say nothing of heated) response than a discussion about the use of test scores to determine things like job security or teacher pay.
But the purpose of this essay is not to delve into all of these topics. Much ink has been spilled, and many tears shed, in these teeth-gnashing discussions. Skeptics wonder aloud whether one test can adequately measure education, and they are concerned that high-stakes testing will create a web of externalities that will prove problematic (witness the scandal that erupted this week, when it was alleged that administrators in one Texas school were falsifying tests in order to claim their school's share of the merit pay pie).
To keep this under 40,000 words, the focus will be kept narrow: the notion of "value-added education," and the desire of Hillsborough County to greatly change the way it retains and compensates teachers based on the output of a mathematical equation.
The rancor in any discussion about teacher accountability is largely born of the mutual distrust (and, it must be admitted, a dash of pure animosity) between the two camps. The "pro-reform" group sees teachers as self-interested and reeking of entitlement, ever fearful of actual accountability for what they do in the classroom. On the other end of the spectrum are people (including many, but not all, in the teaching profession) who think the "reformers" harbor a thinly-veiled contempt for teachers, and often understand very little about the day-to-day workings of the classroom.
I have been a classroom teacher for well over a decade, at this point, so I would not dare pretend that I don't have a dog in this fight.
That said, there is at least some merit to the intention of discovering objective standards to measure educational quality.
One of the reasons why teacher pay has long been based on factors like years of experience and postgraduate units acquired is that these measures resist subjectivity. Administrative review has long been resisted, on the grounds that excellent teachers who might "rock the boat" with their administration could find themselves on the outs as a result (this was also one of the primary premises behind the notion of tenure).
Using student evaluations also has its own perils, as Rob Horning recently noted:
[By using student evaluations] you are undermining the teachers’ authority in the classroom, making them servants to student’s whims—essentially entertainers rather than educators. And you are putting teachers in a position where they have (a degrading) incentive to shop for the most tractable and capable students, and only the worst (or most impossibly idealistic) teachers will consent to teach the most difficult-to-reach students.
So, given that subjective measures of "teacher quality" are left wanting, the notion of quantifiable measurements is no doubt attractive.
But is it attainable?
Give the University of Wisconsin researchers some credit. Unlike a lot of early shots at quantifying the quality of teacher performance, the UW crowd at least attempts to hold teachers accountable only for their year of work on the children in question.
That is the logic behind this notion of "value-added" education. The measurement, therefore, is not the overall attainment of knowledge in a particular curriculum area (for only a fraction of which the current teacher could reasonably be held responsible), but merely the degree to which additional knowledge or skills were acquired in the given time frame.
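To make the distinction concrete, here is a deliberately simplified sketch of the value-added idea: a teacher is credited only for growth during the year beyond what was expected for those particular students, not for everything the students already knew. All names and numbers below are hypothetical; the actual models used by researchers are far more elaborate.

```python
def value_added(pre_scores, post_scores, expected_growth):
    """Average student growth beyond what a district model predicted.

    A positive result means students gained more than expected this year;
    a negative result means they gained less. This is a toy illustration,
    not any real district's formula.
    """
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    surpluses = [gain - expected for gain, expected in zip(gains, expected_growth)]
    return sum(surpluses) / len(surpluses)

# One hypothetical class: fall pre-test scores, spring post-test scores,
# and the one-year growth a district model predicted for each student.
pre = [52, 61, 47, 70]
post = [60, 68, 58, 75]
expected = [6, 6, 8, 4]

print(value_added(pre, post, expected))  # 1.75: modest growth beyond expectation
```

Note that the teacher is not judged on the raw scores (which reflect years of prior schooling), only on this year's gains relative to expectation; that is the whole appeal of the approach.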
But this effort addresses only the worst and most basic liability in the notion of quantifiable education: the idea that an 11th grade teacher is somehow going to fix everything that went wrong in the first 16 years of a youngster's life in approximately 180 hours.
There is still a more fundamental issue in these quantification efforts, one that even their adherents will sometimes grudgingly acknowledge: some things in the educational life of a student simply can't be quantified, and efforts to do so blame the teacher for things inherently beyond their control.
Folks like the University of Wisconsin research crew seem unwilling to concede this point, arguing that all things are quantifiable:
Some Hillsborough teachers worry they'll be penalized for teaching low-income students or those with special needs.
But even students with severe handicaps can be tested for growth, Steele said, vowing to make sure such measures are fair.
Thorn said value-added measurements are designed to filter out student differences in income, family background and other disparities to determine how much teachers contribute.
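The kind of "filtering" Thorn describes amounts to judging each student's gain against the typical gain for students of a similar background, so that a teacher of lower-income students is not automatically penalized for lower raw scores. The toy sketch below shows the idea in its crudest form; the groups, expected gains, and scores are all invented, and real value-added models use regression over many covariates rather than a simple lookup table.

```python
# District-wide expected one-year gains, by (hypothetical) background group.
expected_gain = {"low_income": 5.0, "high_income": 8.0}

# One teacher's class: (background group, observed test-score gain).
classroom = [
    ("low_income", 4), ("low_income", 6),
    ("high_income", 9), ("high_income", 11),
]

# Value added = average gain beyond the background-adjusted expectation.
# Each student is compared to peers of a similar background, not to the
# district average, which is the "filtering" being promised.
adjusted = [gain - expected_gain[group] for group, gain in classroom]
value_added = sum(adjusted) / len(adjusted)
print(value_added)  # 1.0 in this toy example
```

The sketch also makes the limitation plain: everything the model "filters out" must first be reduced to a category with a number attached, which is precisely where the skepticism that follows comes in.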
Color me somewhat incredulous of this assurance. It seems, to say the least, unduly optimistic to declare that a mathematical formula can be devised that eliminates the disparities in education caused by "family background."
That assumption is almost certainly an errant one. Assume for a moment that you could calculate the value, say, of having parents who hold postgraduate degrees versus parents with less than a high school education.
Doesn't that automatically presume that all students of parents with little education value the education of their children the same?
Pretty much any teacher would tell you otherwise.
How could a mathematical formula capture how a student's capacity or motivation for test-taking dissipates when that student's parents are in the midst of a divorce? It can't, because divorce, like any number of other impactful events in the life of a young person, does not engender a universal reaction vis-à-vis education. I have had students whose entire educational experience was torn asunder by the spectre of their parents' marital dissolution, and I have had students whose classroom performance was scarcely affected.
Should teachers now have the opportunity to pre-screen their students, to make sure that no divorces are pending in the family, or that the student doesn't have the annoying tendency to fall in and out of love on the hour? This is intended to sound preposterous, but in reality, if my livelihood, or the compensation I receive for said livelihood, is on the line, these are little snippets of information I would dearly like to know.
This is not to say that research efforts like those underway at the University of Wisconsin and elsewhere lack merit. But it is important to acknowledge that there are limits to what we can reduce to numbers, and that such measurements are far from infallible.
Indeed, this is something that the VARC (Value Added Research Center) seems to acknowledge on its website:
Much basic research remains to be done to build high-quality value-added models and indicators that can legitimately support district and state accountability and high-stakes applications such as pay for performance.
There is little harm, it would seem, in universities devoting time and attention to trying to create such models. But inherent liabilities like the ones described earlier are barriers that will prove difficult to dislodge.
The greater concern here isn't that academic institutions like the VARC exist; it is that school districts seem eager to make major decisions, with enormous implications for their classroom teachers, using tools that even their designers admit are a work in progress.
To raise questions about this is not an intractable resistance to reform or accountability. It is well-placed skepticism.
To base things like compensation, and even career continuation, on program creators' promises that their measurements will be "fair" is to invite a measure of deserved resistance.