We're in an era when many professors fear that student evaluations -- either the formal kind sponsored by colleges or the informal kind found on sites like RateMyProfessors.com -- may play too large a role in whether they earn tenure or raises or, in the case of adjuncts, whether they are hired back.

A new study by three economists at Ohio State University may add to those fears. Previous studies have found that students are more likely to give good reviews to instructors who are easy graders or who are good-looking. The Ohio State study -- in many ways larger and more ambitious than previous ones -- found a correlation between grades in a course and reviews of professors strong enough to make clear that students reward the instructors who reward them.

That finding alone, however, may not negate the value of student evaluations. One explanation could be that good students earn good grades and credit their good professors for their learning. The Ohio State study, however, provides evidence for the more cynical (and perhaps more realistic) interpretation -- namely, that professors who are easy graders, and who aren't necessarily the best teachers, earn good ratings. The team tested this by looking at grades in subsequent courses that would have relied on the learning from the course whose evaluations were studied. Their finding: no correlation between professors' evaluations and the learning that actually took place.

In another finding of concern, the study found evidence that students -- controlling for other factors -- tend to give lower evaluations to instructors who are women or who were born outside the United States. And they found this despite finding no correlation between instructor identity and the level of learning that took place.

While there may be ways to improve the reliability of student evaluations, the authors write, "we believe that any student evaluations are best used in conjunction with peer reviews of teaching."

The study was just released by the National Bureau of Economic Research. (An abstract is available on the NBER website, where the paper may also be downloaded for $5.) The authors are Bruce A. Weinberg, Belton M. Fleisher and Masanori Hashimoto.

In their study, the authors analyzed data from 50,000 enrollments in 400 offerings of principles of microeconomics, principles of macroeconomics, and intermediate microeconomics, taught over a period of years.

While the study is generally critical of the accuracy of student evaluations, its comparison of grading and learning suggests other possible explanations for why students' evaluations don't correlate with actual learning.

One such explanation is that students themselves don't have a good sense of how much they are learning. The authors stress that there are many ways -- such as adjusting for students' bias toward easy graders or against certain groups of instructors -- to continue to use student evaluations as one tool for measuring professors' performance. But used alone and unadjusted, they write, the evaluations appear highly questionable.
