I worked two summers as a full-time temporary employee through a nationwide temporary agency contracted by McGraw-Hill. The following is a description of the hiring, training, and scoring processes.
Hiring
Newspaper classified ads and word-of-mouth marketing were used to advertise for the position of scorer (grader). The requirement for the position was a four-year college or university diploma. Experience in education was not required, but it was considered a plus.
The interview for employment consisted of the following:
Documentation: a transcript from the college or university where the degree was earned
Shift availability
There were three shifts available in 8-, 10-, and 12-hour intervals, seven days a week, weekends optional.
First shift: 8 am to 5 pm, 7 pm, or 8 pm
Second shift: 4 pm to 11 pm, 1 am, or 3 am
Third shift: 7 pm to 3 am, 5 am, or 7 am
I worked first or second shift, varying between the 8- and 10-hour intervals.
Subject Scoring Preferences
The choices were Math, Science, English, Social Studies, and History. The items to be scored consisted of short answer responses, essays, and a few multiple choice questions ranging from grades 3 through 12.
Scorer Pay Scale
Scorers earned approximately $10-15 an hour depending on experience in education. There was an added differential of approximately $1.10 for third shift. A raise of approximately $0.50 was given after 30 days, repeated in intervals according to previously set benchmarks for hours worked.
No Benefits (temporary employee).
It is important to note that outsourced temporary agencies invoice clients like McGraw-Hill approximately two times the hourly temp wage plus 35%-45% of that gross for each temporary employee (I have also worked as a temp for a temporary agency, as an account rep manager's assistant).
This means, all things held constant, at 35% (the low end), for each temp paid $10 an hour, the temp agency invoiced McGraw-Hill approximately $27.00 per hour per employee. The temp made $10 per hour before taxes.
For one temp earning $10 an hour at forty hours a week, the temp agency's invoice to McGraw-Hill equaled approximately $1,080 per week, or approximately $4,320 per month, per employee. That one temp earned approximately $400 per week, or approximately $1,600 per month, before taxes.
This means, all things held constant, for 100 temporary employees paid $10 an hour and working 40 hours, McGraw-Hill's invoice from the temporary agency was approximately $108,000 per week, or approximately $432,000 per month. The 100 temps collectively earned approximately $40,000 per week, or $160,000 per month, before taxes. This does not account for overtime, and many worked overtime. The first and second shifts collectively had anywhere from 300-600 scorers, and third shift had around 100 scorers, each earning a wage somewhere between $10 and $15 per hour ($17.20 with the 3rd shift added differential).
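The invoicing arithmetic above can be sketched in a few lines of Python. This is only an illustration of the markup formula described in the text (two times the temp wage, plus 35% of that gross at the low end); the function name and figures are mine, not the agency's.

```python
def hourly_invoice(temp_wage, markup=0.35):
    """Agency's approximate hourly invoice to the client per temp:
    two times the temp's wage, plus the stated markup on that gross."""
    gross = 2 * temp_wage
    return gross * (1 + markup)

wage = 10.0             # temp's hourly wage, before taxes
hours_per_week = 40
weeks_per_month = 4

per_hour = hourly_invoice(wage)                      # 27.0 dollars/hour
weekly_invoice = per_hour * hours_per_week           # 1080.0 per temp/week
monthly_invoice = weekly_invoice * weeks_per_month   # 4320.0 per temp/month

# Scaled to 100 temps, as in the text:
print(weekly_invoice * 100)    # 108000.0 per week
print(monthly_invoice * 100)   # 432000.0 per month
print(wage * hours_per_week)   # the temp's own weekly pay: 400.0
```

Running it reproduces the $27/hour, $1,080/week, and $108,000/week figures cited above.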
The temps who worked in the "back of the house" were from the same temporary agency, too. The "back of the house" temps scanned the test booklets, and sometimes lost, damaged, or misplaced them. I do not know for sure what they made per hour, or the added differential, but the going rate for temps with similar job responsibilities at the time (2003-2005) was approximately $8-10 per hour.
"Back of the house" temps worked around the clock, but had shifts which overlapped with the scorers'. There were at least 200 total "back of the house" temporary employees in addition to the 300-700 temporary scorers. On a given day, 300-900 temps scored tests or scanned test booklets, each temp earning approximately $300-$600 a week, or $1,200-$2,400 a month, before taxes (excluding 3rd shift differentials, bonuses, etc.). The temporary agency invoiced McGraw-Hill millions and millions, as the project lasted no less than five years.
The temporary employees were supervised by the temporary agency's account representatives. I am not sure what those salaries were, but I think they were paid in part by McGraw-Hill and in part by the temporary agency by means of contractual agreements. Only three people on site were actual McGraw-Hill personnel, and they generally worked the day shift.
Training
Once hired, there was a 1-3 day training and testing of the would-be scorers on the rubrics and computer navigation. Scorers who did not pass the training test could attempt another training for a different subject and/or grade level, if training and spaces in other subjects and grade levels were still available. If not, they could call back in 4-6 weeks and try again when the next batch of tests was in queue. Training sessions held approximately 100-250 people at a time in a large room full of cubicles, each with a computer, scrap paper, and miniature pencils. There were no microphones for trainers to use. Scorers' tests were graded by computer.
The Scoring Process
Scorers viewed and scored scanned student test booklets via computers from cubicles located in a huge, warehouse-like space (picture yourself inside a Super Wal-Mart). Cubicles were organized by subject, grade level, and the state from which the test booklets arrived. Much like data entry operators, using a rubric for reference, scorers entered into the computer a 1 for incorrect or a 2 for correct responses. Some responses had three or four data entry options (for example: 4 = correct, 3 = mostly correct, 2 = partially correct, 1 = incorrect).
Typically, each scorer was responsible for meeting a quota of 400 scored short answer responses and a few multiple choice questions. Higher grades had longer essay responses, and the quota was "as close to 400 a day as possible". The bulk of the students' tests with bubble fill-in responses to multiple choice questions, we were told, were scanned and graded by computers at a different location, off site and out of town.
There was one supervisor per subject per grade level, sometimes for 50 or more scorers. During each shift, supervisors randomly checked a scorer's previously scored student test. The supervisor determined a scorer's error by whether or not they (the supervisor) would have scored the test response the same way the scorer did. If not, the supervisor recorded the scorer's error in the system, but the student's test was not re-scored.
Once a student's test was scored, changes could not be made, even if the supervisor determined it had been scored incorrectly. The error was added to the scorer's margin-of-error statistic by the supervisor and recorded in a software system on the supervisor's computer. Where the scorers' errors were sent and how they were used from that point, I do not know. If temps reached a certain benchmark of errors, they were let go until another round of training for a different set of tests was available. Scorers were not shown the student test again as a point of reference for how to avoid repeating the error; they were only told that their error was recorded and to "be more careful".
Test booklets where the writing was too light to scan were scored by hand. This was difficult because many times the pencil writing was faded or smudged by the time it reached the scorer's hands. To this, management said, "just do your best". The unreadable scanned booklets were scored by one set of scorers, who wrote the score on the inside page of the test booklet. A different set of scorers entered the handwritten scores into the computer. We were told "this (system) saves time".
Because there was very little supervision and there were hundreds of temps, there were scorers who came to work and clocked in, left the building, came back to clock out at the end of the shift, and received paychecks without ever scoring one test that day. Some scorers would have cocktails at the bar (located in the same plaza as the grading site) during lunch or on break and then return to scoring on the computer; others showed up "functionally" intoxicated. Some temps fell asleep while scoring or put their heads down on the keyboard and went to sleep. Some of these people possibly graded a portion of your child's standardized test.
Conclusion
Standardized testing is like a cancer that attacks students, teachers, parents, and taxpayers. Standardized tests do not reflect students' intrinsic qualities such as drive, effort, determination, or potential. Standardized tests do not take into account the language discourse between student and teacher or between student and test, nor do they take into account students who experience testing phobias. Hence, they offer only a partial and generally blurred snapshot of the total student academic profile.
As more standardized tests go fully electronic, I venture to say that tests offered in this format will not take into account students who are less computer savvy, whether because they have less access to the necessary technological resources at home, or because no one explains to the student how taking a test on a computer in a computer lab greatly differs from taking a test at a desk with pencil and paper inside the classroom.
Some schools may receive $1,500-2,000 per head (depending on the state and district) for students placed in remedial classes (this is not common knowledge among parents). Many students are placed in remedial classes according to the score results of standardized tests. The more students placed in remedial classes, the more money for the school. Thousands of students are placed in classes where they do not belong while others continue to profit.
Test generators like McGraw-Hill make small fortunes from school districts and taxpayers. The temporary agencies make a fortune from the McGraw-Hill types and the temps they hire. Nationwide test prep services associated with standardized testing and improving performance make fortunes from the people they hire and from parents who pay thousands for those services for their children. In effect, standardized testing is a cash cow that fuels profit circles at the expense of students, parents, and actual education itself.
As mentioned in http://www.dailykos.com/..., there is talk now that Republicans want to make changes to standardized testing parameters. The tests, in part, are designed to weed out the chaff. As mentioned in the diary cited above, there is concern that the bar is set too high, possibly in an effort to show public schools failing as a push for more private schools. I suspect standardized testing is now adversely affecting Republicans' own homes (their self-interest), which may also be a reason for Republican interest in modifying standardized testing parameters.