Student evaluations are biased. Research provides ample evidence of those biases, as noted in Victor Ray’s recent summary of the contentious debates about using student evaluations to assess teaching effectiveness. These biases overwhelmingly target faculty of color and women. With the end of term fast approaching, and faculty receiving another round of student evaluations, we need to combat the continual (mis)use of standard, survey-based classroom evaluations in faculty reviews. In this essay, I provide some tools to begin challenging how we consider student evaluations in light of the biases found in the research literature.
Let us consider a hypothetical situation facing a black woman -- let’s call her Sandra -- who was denied promotion to full professor at her public regional university. Sandra teaches controversial courses on race and gender that challenge students’ preconceived notions. These courses include conversations about social inequalities and power that can become contentious as students grapple with empirical evidence that challenges long-held assumptions. With the support of departmental and college committees, administrators denied the promotion because of low teaching evaluations, citing evaluation measures of student-faculty interactions as their concern.
Building on this hypothetical situation, I will describe how course content and perceptions of faculty members can influence evaluations. The following steps are meant to give faculty additional tools for raising concerns about their evaluations during reviews. (I should also note that other factors I won’t focus on here can affect how teaching evaluations are interpreted, such as lower response rates on online evaluations compared to paper administration and even system errors that allow a student to respond more than once.)
Step 1: Be critical of evaluation benchmarks and their application, particularly when you teach courses on race, class, gender and sexuality. Not all institutions have their own student evaluation instruments. Many use testing companies like the Educational Testing Service to conduct teaching evaluations. Thousands of students complete ETS evaluations each year, and the company holds mounds of data with which it establishes benchmarks against which future teaching evaluations are compared.
These benchmarks often aggregate data across courses in a discipline. In sociology, however, those courses can focus on an array of subjects including research methods and statistics, race, class and gender, religion, education, law, marriage and family, globalization, and criminology. We should expect that different course content will result in variations in student evaluations -- for example, when comparing an introduction to sociology course to a more controversial course on racism or sexism.
Thus, if you teach courses whose content is more difficult because of the topic, group your evaluations by course type, such as Introduction to Sociology versus Race and Racism. That comparison can provide evidence of how student biases may be course based. Also, having those comparisons available at a departmental level can identify disparities between faculty members who teach the same courses.
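For departments that keep evaluation scores in a simple spreadsheet, this kind of comparison can be sketched in a few lines of code. The course names, column label and numbers below are hypothetical and purely for illustration; the point is simply to average scores within each course type rather than across an entire discipline.

```python
# A minimal sketch (with hypothetical data) of grouping evaluation scores by
# course type rather than pooling them across a whole discipline.
import pandas as pd

# Hypothetical item scores on a 1-to-5 scale, labeled by course type.
evals = pd.DataFrame({
    "course_type": ["Intro to Sociology", "Intro to Sociology",
                    "Race and Racism", "Race and Racism", "Race and Racism"],
    "item_score":  [4.3, 4.1, 3.6, 3.4, 3.8],
})

# Mean, spread and count within each course type; comparing these side by side
# can surface course-based differences that a single pooled average would hide.
summary = evals.groupby("course_type")["item_score"].agg(["mean", "std", "count"])
print(summary.round(2))
```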
Step 2: Apply a possible “bias range” to your evaluation scores. The literature on student bias in teaching evaluations suggests a useful range to apply to evaluation scores. Students rate women faculty roughly 0.4 points lower and black faculty approximately 0.6 points lower. Those penalties suggest that, cumulatively, a faculty member could be penalized from 0.4 to a full point. On a typical one-to-five scale for an evaluation item, a 0.4 difference, let alone a one-point difference, is substantial enough to question how committees use these evaluations. Such biases can be amplified when coupled with the possible differences between course types. Although research has not uncovered how student bias affects evaluations of LGBTQ faculty, it is not hard to imagine that similar penalties affect their scores, too.
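To make the arithmetic concrete, here is a minimal sketch of applying such a bias range to a single reported score, assuming a one-to-five scale. The 0.4-to-one-point range comes from the figures above; the reported score in the example is hypothetical.

```python
# A minimal sketch of applying a "bias range" to an evaluation score.
# The 0.4-to-1.0 range reflects the penalties cited above; the reported
# score and the helper function are hypothetical, for illustration only.

def bias_adjusted_range(reported_score, low_penalty=0.4, high_penalty=1.0, scale_max=5.0):
    """Return the range a score might fall in if the bias penalty were removed."""
    return (min(reported_score + low_penalty, scale_max),
            min(reported_score + high_penalty, scale_max))

reported = 3.6  # hypothetical item score on a 1-to-5 scale
low, high = bias_adjusted_range(reported)
print(f"Reported: {reported}; plausible bias-adjusted range: {low:.1f} to {high:.1f}")
```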
Step 3: If administrators focus on possible low student-faculty interaction scores, present the research. Student biases often reflect racism and sexism in academe. As a recent AAUP survey of faculty found, student evaluations are becoming more abusive and bullying, with responses that include inappropriate and discriminatory language, much like anonymous online comments. Research indicates that students evaluate scholars of color as less competent and even critique how faculty members dress.
A recent study calls into question how much students actually recall about student-faculty interactions during a course. Low scores on student-faculty interaction items can also hide behind the oft-cited “personality differences,” a framing that masks the gendered racism that women faculty of color often face from white male students. Other research found that women are rated significantly lower than men on how “helpful” they are to students. That could directly relate to stereotypes of women as more caring, or as society’s caregivers. A double-edged sword therefore exists for women faculty: regardless of how caring they are toward students, those stereotypes may persist and subsequently affect their student evaluations. Further, we must think about how to interpret student feedback -- both quantitative and qualitative -- using an intersectional lens. That suggests, for example, that controlling images of black women in faculty positions may influence how students evaluate them in the classroom. All of these biases effectively devalue faculty of color, particularly women, and influence their classroom management and student interactions.
These three steps are only a start toward combating student biases in teaching evaluations and limiting the misuse of those evaluations in annual review, promotion and tenure-review processes. Contextual reviews, such as peer evaluations, are more informative measures of teaching effectiveness. In fact, despite the persistent use of teaching evaluations in faculty reviews, the scholarship raises questions about whether they should be used at all.
Finally, if student evaluations of teaching continue to be used, a more critical, holistic approach to them should not fall on the shoulders of faculty members alone. Rather, it should be incorporated into the habits and reviews of committees and administrators, so that a process exists to limit bias more thoroughly at the institutional level.