Just as fall semester becomes a fading memory, an e-mail will pop up in your inbox. It will direct you to a Web link where you can view the collated results of student evaluations of your courses. Most seasoned professors will hit the delete button immediately; or they’ll click, briefly scan the data, and then delete. Not new hires! They’ll pore over the results as if a new Dead Sea scroll had been unearthed and the balance of all that is sacred rests in what they decode.
I understand. It used to be even worse; in the days before the god Electronica ruled the world, many schools published and distributed teaching evaluations on paper. I held my breath as I picked up the first of mine. The initial student comment read: “Rob Weir is the best professor I’ve ever had.” Whew! But wait; the second said: “Rob Weir is the worst professor I’ve ever had.” Never mind that my overall evaluations were glowing; I was crushed by the negative review. What did I do that made that student so bitter? Being a newbie, I also immediately thought that the only evaluator who got it right was the complainer and I had merely duped the other students. Inevitably, I would be exposed as a fraud and barred from college teaching. In short, I panicked.
What can we learn from course evaluations? Plenty, but first a few hurdles must be overcome. First, don’t emulate grizzled senior colleagues who treat evaluations as irrelevant. You should listen to student evaluations. By all means, though, don’t do what I did and take every comment to heart. Stay in the profession long enough and you’ll soon learn that it’s impossible to please everyone. Even if your class featured naked fire-jugglers, at least one student would still complain it was “boring.” You’ll also learn that some complaints are simply reflexive. When have students not grumbled that the workload was too heavy? Or that some courses were scheduled too early in the day? And even if you held office hours 23 hours per day, someone would complain you were hard to reach. So where’s the balance?
Start by not taking official course evaluations any more (or less) seriously than your administration does. For the most part, administrators at community colleges and other tuition-driven institutions look at teaching evaluations much more closely than those at research institutions, many of which hardly look at them at all. Ask colleagues you trust where the chips fall in your college. If the entire process is a pro forma sham, reserve your angst for something more worthy.
If your college attaches at least some importance to student evaluations, put yours in proper perspective. Look first at the columns that record the institutional mean for each criterion being measured. If you’re close to or above that mean, you’re fine and can stop worrying. The way to improve your teaching is to note the areas where you scored above and below the mean. Next semester, do more of the first and less of the second! Among the lessons I learned from doing this was that most students appreciated my availability and respected my preparation, but too many felt that I didn’t integrate reading assignments closely enough into live-class lessons. That was an easy adjustment to make and my evaluations soon reflected that change.
Alas, the biggest thing you’ll learn from official evaluations is that you usually don’t learn all that much. First of all, the sample is seldom representative — it reflects only the views of those who happened to be in class the day you handed out the evaluations. Inevitably, your best students will come down with debilitating senioritis that day, and some of the worst will have just recovered from it. (It’s amazing how the intellectually halt and lame enjoy health and attendance resurgences at the end of the semester!) Only at colleges that require all students to fill out evaluations — some impose fines on those who don’t — can the sample be called comprehensive.
Even if you get a 100 percent return, however, official evaluations are inherently flawed. Too much of what they purport to evaluate applies quantitative measures to qualitative experiences. I know that many colleagues disagree with me on this score, and some whom I admire greatly have labored hard on creating evaluative tools, but I simply don’t believe it’s possible to quantify how professors have nurtured things such as abstract thought, intellectual maturity, curiosity, elegance of expression, creativity, or zeal for learning.
I find that there are more useful forms of feedback and that these eventually will be reflected on the official bubble sheets. One of the best ways to know how you’re doing is to ask students. Very few will have the moxie to slam you in person, but you can ask open-ended questions that will yield important information. (In small classes you can do this orally; in large ones you may wish to have a short in-class exercise.) I ask questions such as: What assignment did you like best this semester and why? Which did you like least and why? What was the very best thing about this course and what made it so? What do you wish we had done more of? What’s the thing you’ll remember most about this class? If I had to drop one thing from the class, what should it be and why?
Be prepared to be astonished at the answers. You may find that the book or article you found most fascinating is dubbed the most useless. (Some students will admit they gave up on it.) If students tell you that the biggest thing they’ll take away from the course is your personality, jokes, or antics, revise for next semester. You’re here to shape minds, not build fan clubs.
Some student feedback can be distressingly revelatory. I had been teaching for three years when I did something I had not done before or since: I left my lesson folder at home. I only discovered this an hour before class and spent a frantic 60 minutes assembling a makeshift plan. That panic attack led to end-of-the-semester soul-searching when many students cited that day’s lesson as their favorite. Upon reflection I realized that I had stripped the lesson to its basics. By necessity it was sparse, but it was also clear and less claustrophobic than my usual detail-choked lectures. That feedback led me to pare details from lectures in favor of leaving space for narrative development, analogies, and student discussion. My official ratings shot up.
Other things I’ve learned directly from students: They love it when I confess I don’t know something that they do. They like it when I ask them to think through a problem with me rather than simply telling them the answer, but they get annoyed if I prolong the process. They enjoy it when I redirect questions and involve lots of people in the discussion. Students get animated when I relate course materials to things in their world (films, music, university issues). They turn off if I’m too critical of their work. The latter was an important lesson. Like many scholars fresh from grad school, I found it easier to critique than to affirm. Students taught me to use praise as prelude to criticism. If all of these things strike you as merely good pedagogy, I’d agree. But I’ll humbly admit they didn’t always seem that way.
The other evaluation you should pay attention to is self-evaluation. Learn to trust your instincts. We can learn a lot from others, but official evaluations are sometimes kinder than they should be. If you muse on your semester, you’ll easily recall the things that worked well. Repeat those next semester. You’ll also remember what bombed. Jettison those. Flops, alas, occur throughout one’s career. This fall, I organized a writing course around the theme of preparing to become a public intellectual. It seemed like a good idea, but it wasn’t. It didn’t even last the semester; I made a mid-course correction because students let me know my plan wasn’t working. It will not be the theme of next semester’s class.
My conclusion is a simple one: no matter where you are on the career path, evaluations can help you refocus. Take good feedback to heart; just don’t let bad feedback break it.