New trends are constantly sweeping across the education field in America. Unfortunately, they often amount to little more than buzzwords and jargon that get thrown around in training sessions: things like “student-centered education,” “creating lifelong learners,” “engaging critical thinking skills,” and the newest, “data-driven instruction.”
While these jargony phrases all have reams of studies “proving” their effectiveness, and often really do have good ideas and practices at their core, just as often they are empty catch-phrases that simply give new names to things we've already been doing.
But data-driven education is different. It seems more pervasive than previous trends I've seen, and while it sounds like a good idea in theory, it is destructive to real learning when put into practice in the classroom.
Boiled down, data-driven instruction means that teachers should use actual data from student testing to drive their classroom instruction. The idea is that it is more scientific and less subjective: you look at test results to see which specific skills and knowledge the students have mastered and which they have not, and use that data to plan upcoming lessons.
So what's wrong with that? Three main things. First, actual teaching time is drastically reduced because of the almost constant testing. Second, It Does Not Work. Third, it wastes teacher time that could be better spent in other ways.
Let me take them one at a time. Every week at my school, every teacher in the four core subjects (math, English, science, and social studies) gives a pre-test, usually 10 questions covering the TEKS (Texas Essential Knowledge and Skills, the specific things the students should know or be able to do that will be tested on the state standardized tests at the end of the year) that will be taught that week. At the end of the week, the students get another 5- to 10-question test to see if they mastered the material.
Those quizzes generally take 10 to 15 minutes each, so 20 to 30 minutes a week: about 10 percent of our instruction time, or about 15 to 18 hours of testing over the course of the year. They are in addition to any quizzes or unit tests we normally give. Then, every 6 to 9 weeks, we have a week of benchmark testing, which seeks to simulate the end-of-year exam. The students are given 4 hours in each of 4 subjects, or 16 hours of testing every six weeks, which works out to another 96 hours a school year. Taken together, that amounts to roughly 110 hours of testing per school year.
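For anyone who wants to check that tally, here is the arithmetic as a short Python sketch. The per-week and per-benchmark figures are the ones quoted above; the six benchmark rounds per school year are my reading of the schedule described:

```python
# Back-of-the-envelope check of the testing-hours tally described above.
# The quiz and benchmark figures come from the text; six benchmark rounds
# per year is an assumption based on "every six weeks."

quiz_hours_low, quiz_hours_high = 15, 18   # yearly hours from weekly quizzes

# Benchmark weeks: 4 hours in each of 4 subjects, six rounds a year
benchmark_hours = 4 * 4 * 6

total_low = quiz_hours_low + benchmark_hours
total_high = quiz_hours_high + benchmark_hours
print(benchmark_hours, total_low, total_high)
```

That lands between 111 and 114 hours, which is where the “roughly 110 hours” figure comes from.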
I've found that I can use my time either teaching kids or testing kids; they don't learn by taking a test. The amount of instructional time lost to this increase in testing is ridiculous. It would not be a bad use of time if it were done for a good reason, but it is not.
Which brings me to point number 2. It doesn't work.
Let's take 8th grade social studies as an example, though I could just as easily pick any grade level or subject. There are roughly 110 TEKS in 8th grade social studies, so even those 4-hour benchmark tests include only one or two questions each on fewer than half the TEKS. And some of the TEKS cover an enormous amount of material. For example, one TEKS is “The student will understand important dates in U.S. History.” There may be one multiple-choice question on that TEKS on a benchmark, and if most students get it right, well, our data has shown us the students have mastered that TEKS and we don't want to waste time teaching it for the next six weeks. But if most students miss that one question, well, our data tells us they haven't mastered that TEKS and we have to reteach it. And it may be an easy question, like what famous document was adopted by the Second Continental Congress in 1776, or it may be something more obscure, like the year of the Battle of New Orleans. The point is, we are supposed to make a huge generalization based on how students answered a single multiple-choice question.
You might ask: as the teacher in the classroom with the students every day, don't you have a better idea of which lessons the students “got” and which ones they didn't? Of course I do, but that would be “subjective,” and data-driven education dictates that we use the objective data (the single multiple-choice question) and not what we observed with our own eyes over the 36 hours we spent in the classroom with the kids during the last six weeks.
Which brings me to point 3: the enormous amount of time wasted analyzing this incredibly sketchy, meaningless data.
Take a 60-question benchmark test given to 7 classes, perhaps 160 kids in all. The teacher has to break down how each class did on every question and which TEKS that question tested, calculate the percentage of students that answered each question correctly, and then create a re-teach calendar for the next month showing which TEKS need to be re-taught and on which day. It takes hours and hours. But that's only the beginning. Then we have to go through each of those 160 students' tests individually, analyze which TEKS each student performed poorly on, and create an individualized tutoring schedule for that student. And remember, there are so many TEKS that many were not tested on that benchmark at all, and the ones that were tested got only one or two multiple-choice questions each.
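To make concrete what that breakdown involves, here is a minimal sketch of the per-question and per-TEKS tally a teacher is expected to produce by hand. The TEKS labels, answer data, and 80-percent mastery cutoff are all invented for illustration; this is not an actual district tool:

```python
# Sketch of the benchmark breakdown described above, with a tiny
# hypothetical answer sheet (3 questions, 4 students).
from collections import defaultdict

# question index -> TEKS it tests (hypothetical mapping)
question_teks = {0: "8.1A", 1: "8.1A", 2: "8.5C"}

# each row: one student's answers, marked correct (True) or incorrect (False)
answer_sheets = [
    [True, False, True],
    [True, True, False],
    [False, True, True],
    [True, True, True],
]

# percent of students answering each question correctly
n = len(answer_sheets)
pct_by_question = [sum(sheet[q] for sheet in answer_sheets) / n * 100
                   for q in range(len(question_teks))]

# roll questions up to the TEKS level -- the "mastery" number
# the re-teach calendar is built from
teks_scores = defaultdict(list)
for q, teks in question_teks.items():
    teks_scores[teks].append(pct_by_question[q])
teks_mastery = {t: sum(v) / len(v) for t, v in teks_scores.items()}

# flag TEKS below an (arbitrary) 80% cutoff for the re-teach calendar
reteach = sorted(t for t, pct in teks_mastery.items() if pct < 80)
print(teks_mastery, reteach)
```

Note that each “mastery” number here rests on one or two questions per TEKS, which is exactly the problem: the computation is easy to mechanize, but the conclusion it supports is not.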
So, when you hear a school principal start talking about data-driven instruction, it's not just some harmless jargon. It's a way of collecting sketchy, mostly meaningless data; chewing up hours of teachers' time that could be used to actually plan better lessons; testing students over and over until they are sick of it and treat the tests like a joke; and deciding what needs to be taught based on one or two multiple-choice questions instead of relying on teachers' ability to observe students they've spent dozens and dozens of hours with in the classroom.
Worse, it forces teachers to “teach to the test.” The entire school year becomes about preparing students to pass the multiple-choice test given by the state at the end of the year: not about truly learning, just about how to answer those test questions.