
CHICAGO -- Student evaluations continue to play a major and outsize role in how departments of economics evaluate teaching, according to a paper presented here Friday at the annual meeting of the American Economic Association in Chicago.

The paper, presented by William Bosshardt, an associate professor of economics at Florida Atlantic University, and co-authored by William E. Becker, professor emeritus of economics at Indiana University, and Michael Watts, economics professor at Purdue University, mirrored many of the findings of similar research from more than a decade ago. Across the 182 departments surveyed, student evaluations made up 50 percent of a teacher's evaluation on average, with 20 percent of surveyed departments reporting that the weight given to student evaluations was 75 percent.

"[W]e find that the evaluation of instruction tends to rely heavily and almost exclusively on SETs [student evaluation of teaching], with almost every department using them in the evaluations of faculty,” the paper said. “In short, it appears the conclusions from the 1999 study remain – the relatively lower cost of SET data is sufficient to justify their nearly exclusive use.” The use (and growth) of online student evaluations might spur this trend even further, though there are valid reasons why student evaluations should not be the dominant measure of teaching, according to the authors.

Heavy reliance on student evaluations can be problematic for several reasons, the authors of the paper argued. Departments can misinterpret these evaluations by comparing averages for all instructors in similar courses, which can be a very imprecise measure. Moreover, instructors may alter their teaching methods solely to boost their student evaluation scores.

The potential problems, according to the paper:

  • Teachers might try to entertain and not educate. “To instructors, generating positive student answers to questions about overall effectiveness and communication skills may smack of entertainment and dumbing down,” the paper says.
  • Professors might try to drive out malcontents or otherwise unhappy students before the end-of-semester evaluations.
  • Instructors might avoid attempts at innovation and play it safe in the classroom just to get better evaluations.

Student evaluations are an incomplete measure, the authors of the paper say, and ideally there should be a more organized and comprehensive way to measure teaching effectiveness. "It is not at all clear that has happened in economics or other fields over the past decade," according to the paper. The reasons for the popularity of student evaluations are obvious: administrators love having ready numbers to work with, and student evaluations are less expensive to administer than other methods of teaching evaluation.

Other methods of evaluation, such as peer reviews, remain far less common: only about half of the surveyed departments used peer reviews, even though some experts say they may be a more sophisticated approach. Of the departments that do use peer reviews, about 30 percent conduct them annually. Such reviews tend to be more common during promotion and tenure, Bosshardt said.

Another evaluation measure – curriculum development and instructional research – was even rarer. “The weight placed on curriculum development and instructional research in forming the overall teaching evaluation averaged 8.7 percent,” the paper says. About 40 percent of the surveyed departments gave no weight to curriculum development and instructional research.

The biggest change in the last decade has been the advent of online student evaluations, the report concludes, with about 35 percent of surveyed departments using them. But how they affect evaluations may be a matter for more research, Bosshardt said.
