Could RateMyProfessors Be Right?

Study finds correlation between ratings professors receive on the much-derided site and through official student evaluations.
June 5, 2007

What if RateMyProfessors -- the site that professors love to hate -- is more accurate than they think? Or what if officially sanctioned student evaluations of faculty members -- which many professors like to contrast with the site -- are just as dubious as RateMyProfessors?

Those are questions raised by a new study by two professors at the University of Maine who compared the RateMyProfessors ratings of 426 Maine instructors with the formal student evaluations used by the university. The results were just published in the journal Practical Assessment, Research & Evaluation. The key findings are that RateMyProfessors ratings have a significant correlation with the formal student evaluations on the questions about the overall quality of the course and the relative difficulty or ease of the course.

Complaints about RateMyProfessors are widespread. Because the site doesn't seek representative samples of students, or even ensure that students are rating professors whose courses they have taken, a challenging professor may receive low scores from students who never did the work, and an instructor who gives everyone A's may be nominated for professor of the year. Studies have found that the best way to score well on the site is to look "hot" and be an easy grader.

So what does it mean if RateMyProfessors has a high correlation with the kinds of student evaluations that colleges see as more valid? "The results of our study are meaningful only insofar as one regards student evaluations of teaching as meaningful," said Ted Coladarci, professor of educational psychology at Maine and the author of the new study, along with his Orono colleague, Irv Kornfield. Someone who believes that formal student evaluations "are of little value" would find their correlation with RateMyProfessors to be "at best, entertaining," Coladarci said.

But for those, like the authors, who believe that student evaluations have meaning, the correlation should give them pause about criticizing the site, he said. "Our attempt is not to persuade non-believers to believe," he said, but the results "will come as a surprise to RMP skeptics."

At the same time, Coladarci cautioned that the correlation isn't universally high. The overlap is highest among those professors who are popular on RateMyProfessors -- they also do extremely well with traditional student evaluations. "The pattern of this association suggests that when an instructor's RMP overall quality is particularly high, one can infer that the instructor 'truly' is regarded as a laudatory teacher," the study says. However, the correlations are much weaker for those who don't score well, so Coladarci is much more hesitant to assume that poor ratings are equally meaningful.

While the paper finds more validity in RateMyProfessors than many professors would like to see, the study may reassure professors who don't score well on the Web site's "hotness" rankings, which earn select professors a chili pepper. The Maine professors found no significant correlation between earning a chili pepper and good scores on traditional student evaluations -- suggesting that (at least at Maine) students are paying attention to what professors say rather than to their appearance.

In fact, the article -- which on the whole is more positive toward RateMyProfessors than most academic research on the service -- calls the chili pepper "a frivolous distraction that compromises the credibility of RMP."

As a result of their research, the Maine professors offer two recommendations -- both of which are sure to be controversial and one of which they admit to having mixed feelings about. The recommendation that the professors make without hesitation is that colleges put their official student evaluations online. "Although students doubtless would applaud this move, many faculty would oppose it because of genuine concerns about privacy and the negative consequences," the professors write. And indeed moves to put evaluations online have been controversial at some campuses.

But the article adds that "privacy is a thing of the past in the age of RMP, MySpace, and the like," adding that not making such evaluations available creates its own set of problems. "Students will rely on what is publicly available," and will thus not always be accurate in their assumptions, the Maine professors write.

The recommendation on which the authors admit to some "ambivalence" is this: "Predicated on the belief that RMP is not going to go away, higher education institutions should consider encouraging their students to post ratings and comments on RMP," they write. If a larger sample of students participates -- and they are encouraged to be responsible in their ratings -- "the potential value of that information to the institution would only be enhanced," they write.

And what about those chili peppers? "Appealing to students' sense of decency and fair play, furthermore, the institution could endeavor to discourage students from rating the hotness of the instructor."
