
The semester is over; the school year is complete.  Two weeks ago I watched our students walk across a stage to get a diploma case (the actual diploma is mailed later).  The week after that I spent in a rather chilly computer lab reading portfolios of work submitted by graduating students.  That inspires this afternoon's diary topic:  How do we know if we are doing a good job or not when we teach our students?

This is a very important issue in modern education.  The politicians who want us to assess our skills as teachers based on high-stakes standardized testing at the end of the year have one solution.  Clearly there are valid concerns about that.  For starters, the inputs are so variable that comparing, say, a suburban middle-class school in St. Louis County with an urban school in a decaying lower-income neighborhood makes an apples-to-apples comparison impossible.  That variability includes the general health of the students coming into the day: have they had enough to eat over the weekend?  Have they been able to sleep the night before, or has it been too loud for them to get effective rest?  Even apples to oranges is not a valid description; perhaps we should go with apples to pine nuts – both edible, both from the same general environment, but from completely different kinds of plants [oops – my natural history summer reading list is showing].

But the desire to have some accountability is understandable.  I don't want my students to go through four years of university without having learned anything (in terms of skills, if not content), and I don't want them to come from high school without a certain basic level of skill development, either.  Since I teach at a university with a more or less selective admissions policy, I am lucky that there is some evaluation of student skills coming in, but I don't want a college education to be only for those middle-class students from the suburbs who have had all the advantages their parents' status and income (and previous education) have given them.  And I want to be able to know that students have gained from their experiences here and in earlier educational settings.  This isn't exclusively an assessment of outcomes, but that is indeed a part of it.

Come with me to the orange wonderland past the creamsicle highway...

I teach at a university where we ask our students to do evaluations at the end of every semester.  In my discipline, we have developed our own standardized questions, and the answers provided after the students have filled in bubble sheets (or now, have answered the survey on a computer) are in the form of numerical results averaged from the responses of a whole class.  The questions include such things as "Did the teacher provide prompt and meaningful feedback?" "Was the professor accessible during posted office hours?" "Were the requirements clear?" "Did the tests evaluate material covered in the course?" and "Provide an overall evaluation of the instructor."  We used to hand these out in class and leave, but now all students in a class are sent a link to fill out the evaluation.  There was a certain joy in knowing that if a student did not come regularly to class, he or she would not be around to fill out the evaluation, but now that all enrolled students are sent evaluations, anyone who has given up on the class or the prof can use them to rant (sometimes this is completely valid, but it is a depressing prospect at times as well).  If we want to add questions, or have things we want answered in more detailed feedback, we are welcome to add them to the survey or to hand out questions in class for students to collect among themselves and stuff into an envelope delivered to the department chair and returned to us after all the grades are in.  I have used this latter process off and on for feedback on course organization, textbook selection, and other similarly concrete things (I have changed textbooks for the fall based on student feedback from a class taught on an every-other-year schedule, for example).

When I started, my dean (that was not his title, but it was effectively his position) would go through the evaluations with every faculty member every year and tell us how horrible our evaluations were, and how, on a scale of 5, a rating of 3 was impossibly bad.  We should be looking at getting another job, either teaching at a research university or attempting to get a job in another field altogether.  You see, if you were not an excellent teacher coming in, it was hopeless, because one could not learn to teach well.  This was news to the people who were working to mentor newer faculty.  They reassured me that there were things I would be able to do to improve my abilities, and that this guy was simply wrong.  (It was amusing when he finally faced being evaluated himself and did the things we were never allowed to do – he handed out his evaluations at a required meeting and stayed in the room, told us that if we thought he was average we should mark a 4 rather than a 3, and said that really he did a good job, so he should be given 5s if we agreed that he did anything well…)

And by the way, I have seen an improvement in my evaluations since my first two years of teaching.  

So standardized testing, standardized questionnaires, and feedback surveys are forms of assessment I have used or experienced (even in elementary school in the 1960s, we took bubble tests every two years as a pilot program -- it was the Iowa assessment or Iowa test, or Ames testing, I forget the name -- so I was experienced with bubble-sheet assessments from a very young age).  I have also, on rare occasions (when I want to beat myself up!), gone to sites such as "Rate My Professor," which is depressing in some ways and comforting in others.  The last time I looked at my own ratings, there were some who commented on my appearance and "bitterness," but the majority commented that I was really hard, and that when I said students should come and talk to me if they needed assistance, I meant it and would be really helpful if only students came and asked for help when they needed it.  That is about as good as I figure I will ever get from a site like that, so I have worked very hard not to give in to the temptation to return (after all, those ratings really are designed for students, not for faculty, or that is what I tell myself).

There is also the anecdotal evidence -- the students who come by to see you the next year just to say hello, or to tell you that your class is intersecting with the one they are in this semester and they wanted you to know -- the comments one gets on Facebook after a student has been out for several years -- or the portfolio submissions of a paper written for my class as the "Most Satisfying Personal Experience."  The latter tend to show up as "this was the hardest I worked on a paper in my time here, and it showed I was able to earn an A" narratives.  The fact that they were happy with their success makes me happy as well.  The letters you get years after a student has graduated -- "I was in this museum and saw a piece I remembered from your class" or "You are right!  This building IS really something special!" -- are successes in my book as well.

Then there are the students who graduate who might not have done so, because you helped them through a particularly difficult period, or helped them figure out how to succeed in a particularly challenging class.  There are those who apply to graduate schools or for jobs, and use you as a reference because they were proud of how they did in your classes, and they know that you knew them and cared about how they did -- they were not just nameless faces in a classroom.  Then there are those who succeed in graduate school or at a job because you have taught them how to write, how to research, how to build a persuasive argument, how to manage a complicated set of data and explain it to a person who does not understand statistical analysis in any depth.

And there are awards such as "Educator of the Year," for which you are sometimes fortunate enough to be nominated and occasionally even selected.  For me that is very rare but very much appreciated. Very.

In other words, the success you have as a teacher is measured in a wide variety of ways, from evaluations by your students to anecdotal communications.  Your failures, as well (something I have written about in this series, too), can show up in many different contexts.  Each has its value, and each has its meaning.  As you head into your summer, may you have good, positive feedback from your students at every turn.

When does your semester/school year end? What are your summer plans?  Will you be teaching again in the fall?

And congratulations to our professors emeriti!  Ojibwa and rserven will be teaching us but not standing in front of a classroom this next year.  Congratulations on your retirement, both of you!

UPDATE: Thanks to rserven, I corrected the Latin grammar.  Ooopsie.

Originally posted to annetteboardman on Sat May 24, 2014 at 12:30 PM PDT.

Also republished by Teachers Lounge and Community Spotlight.


  •  Tip Jar (15+ / 0-)

    I hope you all have a wonderful summer!  I will be around but this diary series will be on hold until we get back on campus in August.  I look forward to reading your stuff, though, both diaries and comments.  

    Have at it!

  •  Thank you for the shout out. (6+ / 0-)

    Correct phrase would be "professors emeriti."  < /pedant >

  •  of course the question is: class or curriculum (3+ / 0-)

    and not paper or plastic, not on-ground or online, since assessment suffers, as we have seen in the K-12 realm, from being a flawed quantitative methodology used more politically than pedagogically, especially as scale is amplified

    Warning - some snark may be above‽ (-9.50; -7.03)‽ eState4Column5©2013 "If we appear to seek the unattainable, then let it be known that we do so to avoid the unimaginable." (@eState4Column5)

    by annieli on Sat May 24, 2014 at 01:21:27 PM PDT

  •  I think that the answer is... (0+ / 0-)

    ...simpler than we want to believe.

    A) Test the kids at the end of the term.
    B) Subtract the score from the beginning of the term.
    C) Make adjustments for factors that are known to have quantitative impacts on test scores, such as:
       1) Class size
       2) Socioeconomic characteristics of the kids
       3) Socioeconomic characteristics of the community

    How effective was the school? Just add up the numbers.

    This works for math, reading, science, grammar, and even history.  It doesn't work well for music or art. (But nothing short of submitting portfolios to panels of experts works for music and art!)
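
    In code form, that "test, subtract, adjust" procedure amounts to something like the following.  This is a minimal, hypothetical sketch in Python: the field names, adjustment terms, and example numbers are illustrative placeholders, not any real value-added model specification (actual VAM implementations are far more elaborate statistical models).

        # Hypothetical sketch of the "test, subtract, adjust" arithmetic above.
        # The adjustment terms and numbers are made up for illustration only.
        from dataclasses import dataclass
        from statistics import mean

        @dataclass
        class Student:
            pre_score: float    # score at the beginning of the term
            post_score: float   # score at the end of the term

        def adjusted_class_gain(students: list[Student],
                                class_size_penalty: float = 0.0,
                                socioeconomic_offset: float = 0.0) -> float:
            """Average raw gain for a class, minus hypothetical adjustments
            for class size and socioeconomic context."""
            raw_gains = [s.post_score - s.pre_score for s in students]
            return mean(raw_gains) - class_size_penalty - socioeconomic_offset

        # Example: three students with modest gains, lightly adjusted.
        kids = [Student(52.0, 61.0), Student(47.0, 55.0), Student(60.0, 66.0)]
        print(adjusted_class_gain(kids, class_size_penalty=0.5, socioeconomic_offset=1.0))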

    True, there are evil people who want to use testing schemes for political purposes. But there are also regular parents and taxpayers who need solid reassurance that the lives of their kids and the dollars from their taxes are being well-spent.

    If teachers don't help develop good quantitative & standardized measures, they will be forever at the mercy of bad quantitative & standardized measures.

    Some will complain that they don't like "high-stakes" tests. Well, the education of a child is very important. My child has exactly one chance at 2nd Grade, so the "stakes" for her are incredibly high! If she is not learning, she will lose a year that she can never get back.

    Those who do not like "high-stakes" situations should not have chosen to teach. They should have chosen a low-stakes career where the outcomes are less important, such as advertising, poetry, or designing t-shirts about bacon. Unlike those professions, Education is very, very "high-stakes"...!

    •  Another shout-out for VAM. Haven't you heard? (3+ / 0-)
      Recommended by:
      OregonOak, gffish, stretchslr53

      VAM doesn't work. It measures teacher-added value about as well as measuring change in student height.

      But you go on cheerleading. It's most interesting.

      •  I don't understand... (0+ / 0-)

        ...why you linked that study. It doesn't say anything about VAM.

        The study just says that "what teachers claimed to teach when we surveyed them" is only weakly correlated with "what students actually learned when we tested them".

        (I have seen this done before. An anti-reformer links a bunch of random studies that contain relevant keywords in their abstracts. Then, when I actually read those studies, I find that they don't prove the case, or that they sometimes prove the opposite!)

        Look: Teachers have always complained that low test scores aren't their fault. They say that the socioeconomic situation of the student is a bigger influence than anything they can do in the classroom. Studies tend to support this.

        Well, fine. Let's adjust the scores for socioeconomic factors. Now there is nothing left to complain about. Or so a rational person would think...

        •  Here's the abstract: (0+ / 0-)
          Recent years have seen the convergence of two major policy streams in U.S. K–12 education: standards/accountability and teacher quality reforms. Work in these areas has led to the creation of multiple measures of teacher quality, including measures of their instructional alignment to standards/assessments, observational and student survey measures of pedagogical quality, and measures of teachers’ contributions to student test scores. This article is the first to explore the extent to which teachers’ instructional alignment is associated with their contributions to student learning and their effectiveness on new composite evaluation measures using data from the Bill & Melinda Gates Foundation’s Measures of Effective Teaching study. Finding surprisingly weak associations, we discuss potential research and policy implications for both streams of policy.

          Your "simple answer" is nothing more than VAM, which is nothing more than an attempt to measure by testing the effect the teacher has on a student, which is exactly what this study looks at.

          Your failure to understand that is not anyone else's problem. Likewise, just "adjusting" for socioeconomic factors throws out the largest percentage, by far, of all factors that affect student learning. What is left isn't going to tell you anything that any competent administrator can see directly, without having to resort to expensive high-stakes testing that has not shown any reliability whatsoever.

          From the American Statistical Association statement linked above:

          Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions.

          The VAM scores themselves have large standard errors, even when calculated using several years of data. These large standard errors make rankings unstable, even under the best scenarios for modeling.

          So, a "rational person" might just think that VAM is pretty useless, and that "adjust(ing) the scores for socioeconomic factors" doesn't leave any information worth analyzing. Right?
          •  Correction: (0+ / 0-)
            What is left isn't going to tell you anything that any competent administrator can't see directly, without having to resort to expensive high-stakes testing that has not shown any reliability whatsoever.
            Silly me.
            •  And we have found the problem! (0+ / 0-)

              Maybe the Competent Administrator can evaluate the kids. Maybe he can't.

              Or, maybe the Competent Administrator will lie. Maybe this person will claim the kids are learning because that keeps his job safe and the property taxes flowing. One good reason for standardized testing is to prevent these lies.

              Huge numbers of kids are awarded high school diplomas, yet are not prepared for college work. Clearly, those diplomas are lies. Yet the Entrenched Educational Establishment appears unable to stop itself from continuing to lie.

              We should not be surprised. When we let Bankers, Defense Contractors, or Insurers "self-regulate", they lie. They always do.

              Why do we suppose that the Entrenched Educational Establishment is somehow immune to human nature? Why should we believe their assurances that all is well? I say, Trust But Verify.

              Let's test and then we will see if the Competent Administrator is lying to us.

              •  Completely missing the point. (0+ / 0-)

                Administrators evaluate teachers.

                When the livelihood of teachers is threatened by a series of high-risk tests that are not statistically valid (or even particularly valid, as in the case of many teachers who are evaluated on the progress of students they don't even teach, or the progress of students in subjects they don't teach!), then what will result is cheating.

                The very tests you are pushing for are guaranteed to create institutionalized cheating (and have, if you have kept up on your reading), both by faculty and administrators.

                Why do you keep pushing for something that produces these results?

                And, yet again, you also completely ignore the proffered evidence that refutes your claims. Not new, but worth noting. Again.

          •  Just the opposite. (0+ / 0-)
            "Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores".
            Yes. So most of the learning is beyond the teacher's control. I don't think we disagree here.

            What VAM does is take the 1%-14% and measure what the teacher did. Not what the parents/society/environment did.

            Since you are only responsible for 1%-14% of the outcome, VAM only evaluates you on that part. Why is this not fair?

            VAM also shuts down many of the "no excuses" charter cheerleaders who claim that "all kids can succeed" and love to put 100% of the blame on teachers for any failures. (I support charters, but these guys are unrealistic and possibly politically motivated).

            All kids can't succeed. Kids in bad home situations will do worse. Any evaluation of teachers needs to respect that reality and not penalize good teachers for bad parenting or a bad local economy. That is the goal of VAM.

            •  You seriously think any evaluation that is based (0+ / 0-)

              on as little as 1% of the factors that influence a student's progress is valid enough to end a teacher's career?

              Seriously?

              Why do you think you have any credibility whatsoever?

              Certainly supporting charters doesn't help your cause, regardless of your attempted differentiation between "good" charters and "bad" charters. In the current climate, there is no way to promote one without promoting the other, which you ought to be able to recognize.

              And the "goal" of VAM is immaterial, when it is so completely insufficient to the task, as all recent research has been indicating.

    •  VAMMAN! Haven't seen you in a while! (2+ / 0-)
      Recommended by:
      gffish, stretchslr53

      But, unfortunately, your formula for VAM is considerably less sophisticated than the real one, and the real one doesn't show anything significant.

      It does show which students come from families with no one in college in the home. But you can do that by just asking them, and you don't get to judge their teacher for that.

      If you want to see how your child is doing compared to others in her grade level, test her. You can do that. There are many instruments online and many professionals charge for that. If you are unhappy with progress at say, Semester or Quarter, move her. You do have that choice now.

      If you implement your suggestions nationwide, you will skew the results toward only the items you want to test. You cannot test for the things which are important about school, being responsible, being aware of others' social needs and desires, and how well you treat the new kid who cannot speak English very well. Give me THAT kid in my workplace ANY day over the technically perfect. I can use that skill.

      Figures don't lie, but liars do figure-Mark Twain

      by OregonOak on Sat May 24, 2014 at 04:03:54 PM PDT

      [ Parent ]

      •  Of course I get to judge the teacher. (0+ / 0-)

        If I have two teachers and, ceteris paribus, one has kids who improve more, that teacher is probably better.

        But if one teacher has a bunch of kids whose parents have low educational attainment, then shouldn't that be a factor in their evaluations?  If not, why not?

        You write:

        "You cannot test for the things which are important about school, being responsible, being aware of others social needs and desires, and how well you treat the new kid who cannot speak English very well."
        Are you telling me that if we teach more math the kids will become less caring and more cruel? I find it hard to believe that learning more grammar or reading will make kids mean.
        •  Gross Reductionism.. you are a perfect example (0+ / 0-)

          of people who only look at dissected frogs because you cannot measure the behavior of live ones. You prefer dead ones because you can assign numbers to their attributes, but in the end, you have a dead frog, my friend.

          Figures don't lie, but liars do figure-Mark Twain

          by OregonOak on Sun May 25, 2014 at 10:16:30 AM PDT

          [ Parent ]

  •  My students are not qualified to evaluate me (2+ / 0-)
    Recommended by:
    annetteboardman, ManhattanMan

    I teach mathematics at a rural community college in California.  We administer evaluations (via email) every quarter.  While the information students provide is interesting and may provide some insight into my teaching, the fact remains that my students are not qualified to evaluate my effectiveness in any way more meaningful than a popularity contest.

    I find peer evaluation much more valuable, although this is hard to come by as our faculty do not have, or make, the time to observe one another.

    As for standardization and test-based measurement of student learning, I have experience teaching in British-style school systems which teach to an exam.  Students demand that their teachers stick solely to the syllabus, a disaster in any inquiry-based, Socratic learning environment.  And it shows in the results - students are knowledgeable, but uneducated.  They possess rote knowledge, but little understanding.  Faced with a new set of circumstances, such students are helpless.  Thus using test scores to evaluate a teacher's real effectiveness is misguided, and simply lazy.

    By virtue of having once been a student everyone considers themselves an expert in the field of education.  I've been to the dentist, but claim no particular expertise in dentistry.  Such is the disdain with which our profession is viewed.

    •  You're absolutely right. It's amazing how many (0+ / 0-)

      people who are essentially number-pushers think that everything can be treated by looking at numbers.

      And then they get all activist, donating money (Zuckerberg, Gates, Walton, et al) and propaganda to promote their faulty premises, indifferent to the chaos and damage their efforts wreak.

      It's Very Annoying.

    •  I would like to challenge... (0+ / 0-)

      ...one of your statements.

      "...And it shows in the results - students are knowledgeable, but uneducated.  They possess rote knowledge, but little understanding."
      Do you have any empirical evidence of this? Has any study or research shown that there are large numbers of kids with high test scores but little "understanding" of the material?

      I have never seen such evidence.
