Empowering Students Through Instructor Evaluations

We need to teach students how to assess their professors without bias, writes Bryan A. Banks, who asks students to devise their own rubrics for evaluating his teaching.

April 28, 2022

A colleague of mine once quipped that RateMyProfessor.com should be called HateMyProfessor.com, underscoring the idea that students who fill out evaluations often approach them determined to be overly critical of their professors. Rarely do students voluntarily complete evaluations when they hold lukewarm opinions of their professors. What feedback they do leave is frequently tinged by sexism, racism, ageism and/or personal vendettas.

How can faculty make the greatest use of student advice to hone their craft if the feedback they receive borders on hate speech or is so celebratory that it merely boosts egos? Either extreme can undercut the desire to think critically about one’s pedagogy.

If colleges and universities want to gauge student perspectives on educators, students need to help create those evaluations. At the same time, we need to make sure students are incorporated into the process in a way that forestalls their explicit or unrecognized prejudices. In short, they need to be taught how to evaluate. How to evaluate one’s teachers is not self-evident, and not all educators are judged equally. So how can we do this?

I’ve developed an exercise, born out of frustration and revised through successive attempts, to achieve this goal. I have students devise their own rubrics for my classes—rubrics they get to apply to my teaching.

At the beginning of the exercise, I have students complete a standard teacher evaluation form: “What are your professor’s strengths and weaknesses? What would you change about the course, and what would you keep?” After completing that part, I have them tuck the evaluations away.

Next, they get into small groups and devise a rubric. Some come up with a simple three- or four-category rubric, while others draft longer rubrics with a dozen or more categories. Content covered. Delivery mode. Transparency of expectations, especially on assignments. Level of compassion for students (or lack thereof). These inevitably make it onto the list, but other more problematic categories also work their way into the mix. Professional attire. Professorial tone. Categories that inevitably reinforce their biased perception of a college professor. In their small groups, the students discuss at a preliminary level what the most important factors should be and then assign points to each category.

This part of the exercise gets the students to think about what makes their teacher, or any teacher, effective. It is also a part of the process where students get the opportunity to think critically about their presuppositions. Is “professional attire” as important as mode of delivery? What does a student even mean by “professorial tone”? In their groups, they begin to hash those issues out in the weighting process, which decenters the discussions from the instructor personally to more general questions related to teaching efficacy. So rather than pulling out my soapbox to discuss the evaluation bias that my colleagues who are not cisgender, heterosexual, white males like myself face, the students broach these discussions themselves. At least they do in my history classes, but colleagues who have run this exercise in other departments have noted the same shift in discussions, as well.

After their groups have compiled and crafted their weighted-rubric teaching evaluation, they share it with the rest of the class. They explain their points. They draw from their own experiences in one class or another. They describe the eccentric professor who wore jean shorts and a Hawaiian shirt to class but managed to engage students on a personal level while introducing them to Proust. Or they describe their chemistry professor whose “anal-retentiveness,” while off-putting to some students, modeled the controlled behavior the lab needed to function safely. Each group presents their rubric and compares what they devised with the groups that went before them.

Then they decide which rubric is “best.” Which is too long? Which is too short? But, most important, which reflects the balance that students care about the most? Spoiler alert—in all the time that I’ve been using this exercise in the classroom, the students care about compassion and communication skills more than rigor. They care about creativity, experiential learning and transparency. They want to see their instructors as people learning through the process of teaching.

Finally, they fill out the rubric they collectively deemed best as they reflect on my teaching. I keep this stage anonymous, because I want the feedback to be honest, but before the exercise is over and before I collect their feedback, the students pull out the pretest evaluation and compare their student-created evaluation with the generic one. How did their notes compare and how did they change? I normally leave the room for that discussion.

For all of higher education’s talk of student-centered approaches to teaching, of high-impact practices and of the need to reflect on lessons learned in the classroom to optimize the learning process, we seem to have missed a step. For all the critiques of student evaluations of teaching, we still treat students as consumers, getting them to rate their instructors as if they were rating a yoga class or a Yeti mug. Shouldn’t the evaluation process by which students give us feedback be partially devised by them? Shouldn’t the reviews reflect the critical thinking skills they’ve developed in the class they’re evaluating?


Bryan A. Banks (@BryanBanksPhD) is assistant director of the Experiential Learning and Career Design Center and an assistant professor of history at Columbus State University. He would like to thank his students who workshopped this exercise and provided feedback, which challenged his approach to teaching.

