In which a veteran of cultural studies seminars in the 1990s moves into academic administration and finds himself a married suburban father of two. Foucault, plus lawn care.
Why Good Student Course Evaluations Are So Hard to Find
An important tool, despite its many problems.
What are student course evaluations for?
If the answer to that were simple, it would be easier to design them. But student course evaluations -- and administrators’ observations, for that matter -- serve multiple purposes.
They serve as feedback for the professor. While much of what students write in evaluations is contaminated by one form or another of a halo effect, it’s occasionally possible to discern something useful. Sometimes they liked the class as a whole, but really disliked a particular assignment or reading. Maybe an exercise intended to show one thing was taken to show another. This kind of feedback is intended to be formative for the next semester or year.
They serve as safety valves for student opinions. It’s harder for students to complain that nobody cares what they think when they get asked directly, over and over again. Translating those opinions into observable action on a student’s timeframe is another issue, but they can’t say they weren’t asked. That matters.
They frequently play into promotion or tenure decisions. That’s probably untrue at research universities and only theoretically true at high-profile national colleges, but it’s typically true at community colleges with tenure systems. In this sense, they’re summative. They offer either confirmation of or counterevidence to professors’ claims of wonderfulness.
On the flip side, they can serve as ammunition for negative personnel decisions. When a dozen students from the same class write variations on “good professor when she bothers to show up,” that’s a red flag. Even the numerical part can be instructive. At a previous college, I received the numerical rankings every year. After a few years, I noticed that the same few names kept bringing up the rear, usually by a significant margin. If the same person scores multiple standard deviations below the mean year after year, well, I have some questions.
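For the quantitatively inclined, the check described above -- noticing when the same person scores multiple standard deviations below the mean, year after year -- can be sketched in a few lines. This is purely illustrative; the function name, threshold, and sample data are invented for the example, not drawn from any actual evaluation system.

```python
# Illustrative sketch: flag instructors whose average evaluation score
# falls well below the cohort mean. All names and numbers are made up.
from statistics import mean, stdev

def flag_outliers(scores_by_name, threshold=2.0):
    """Return names whose mean score is more than `threshold`
    standard deviations below the overall mean of instructor averages."""
    avgs = {name: mean(s) for name, s in scores_by_name.items()}
    overall = mean(avgs.values())
    spread = stdev(avgs.values())
    return [name for name, a in avgs.items()
            if a < overall - threshold * spread]

# Example: three instructors cluster near 4.5; one sits far below.
ratings = {
    "A": [4.5, 4.6],
    "B": [4.4, 4.5],
    "C": [4.6, 4.7],
    "D": [2.0, 2.1],
}
```

The single-number version misses a lot, of course -- class size, course difficulty, and who gets assigned the 8 a.m. sections all matter -- which is why a flag like this should prompt questions, not conclusions.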
The tricky part is that what makes for constructive feedback may not make for useful fodder for promotion decisions. Formative assessments are great and humane, and I’m all for them when people are basically competent and actually trying. But when those conditions don’t hold, for whatever reason, formative assessments aren’t terribly helpful in making adverse decisions. If the decision gets challenged -- and it will -- you’ll need strong and unambiguous language condemning the performance. When one form has to serve both purposes, it’s little wonder that it does neither well.
Even the safety-valve function is theater at best. The degree to which student opinions have an effect varies from case to case. For a full professor with tenure, the external impact of the difference between “above average” and “meh” is approximately zero. In that case, some student cynicism is hard to dismiss. In the case of someone coming up for tenure, or an adjunct hoping to make the leap to full-time, the same difference could matter. And the degree to which any given professor takes feedback and uses it constructively varies from person to person, even within ranks.
Clearly, student course evaluations shouldn’t be dispositive on their own. Students respond to cues both appropriate (clarity, respect) and inappropriate (attractiveness, accents). I read once that students tend to reward gender-conforming behavior: they prefer men who are authoritative and women who are nurturing. Women who are authoritative and men who are nurturing get downgraded. New and unexpected teaching approaches can be polarizing, so faculty who are not in a position to risk it may avoid innovation. And if years of blogging have taught me anything, it’s that the caliber of anonymous comments can be, uh, let’s go with “uneven.” Too much faith in any one source is a problem, even assuming that the source is clear and valid.
I’ve seen and heard proposals to do away with student course evaluations altogether, but I’ve never seen a convincing alternative. Theoretically, one could do pre- and post-tests to measure “value added,” instead of asking opinions, but how you’d get students to take pre-tests seriously isn’t clear. I’d also hate to see higher education repeat the mistakes of K-12. You could measure performance in subsequent courses, though in small programs or departments you’d run into an issue of circularity, and in any size department you’d have trouble controlling for inputs. Colleague and supervisor observations can help round out the picture, but they’re often so limited -- and even pre-announced -- that they come closer to measuring potential than performance. If I observe a class for a day, I may see a terrific discussion, but I won’t notice that the professor takes a month to return papers. The students are the only ones in a position to see that. Shutting off that source means giving up on some pretty important information.
Wise and worldly readers, have you seen a particularly good version of student course evaluations? Is there a reasonably elegant way to serve so many disparate purposes at once?