The single largest component of U.S. News & World Report's formula for ranking undergraduate colleges is also the most controversial: the "peer rankings," in which college presidents rate all similar institutions. Criticism of the peer surveys as unfair has grown, leading many liberal arts college presidents to boycott that part of the process. The rankings were also the subject of much ribbing when Clemson University released records showing that its president had rated his university above every other one in the country.

Rankings will be much in the news in the weeks ahead, with the latest from U.S. News due out this week and the National Research Council's doctoral program rankings, which use a new methodology, due out sometime soon.

Peer rankings also matter a lot in the U.S. News graduate rankings, and a new study raises questions about whether the peer rankings -- done by deans and others in the graduate fields -- may favor some of the same characteristics covered by other parts of the methodology, rewarding some kinds of graduate programs over others. The study -- to appear in a forthcoming issue of the journal Research in Higher Education -- was conducted by Kyle Sweitzer of the Office of Planning and Budgets at Michigan State University and Fred Volkwein of the Center for the Study of Higher Education at Pennsylvania State University.

Defenders of the peer assessments tend to say that they allow experts to give attention to colleges that may be true to their missions or have unique qualities that do not fit into the rest of the U.S. News methodology. Robert Morse, who directs the rankings for the magazine, recently blogged that peer assessments for undergraduate programs allow those who fill out the forms to help "account for intangibles such as faculty dedication to teaching."

The new study found that the peer assessments align less with intangibles than with specific, measurable characteristics that may not always correspond to quality or to the missions of some programs. For example, the study found that peer assessments correlate with the size of programs in all five areas analyzed: business, education, engineering, law and medicine. Peer reviewers also appear to place a heavy emphasis on standardized test scores: average scores were significant in every graduate category except education, and test scores appear to have the greatest influence on reputation (as measured by the survey) at law and medical schools.

Other factors that are "significant" in at least four of the graduate areas where U.S. News collects peer assessments are faculty productivity, as measured by per capita publications, and tuition. Student-faculty ratios have no relationship with the assessments in three of the categories and only a weak connection in the other two. (The peer assessments count for 25 percent of the score in business, education, engineering and law, and 20 percent in medicine.)

The study concludes by raising questions about the validity of the peer assessments in several areas -- largely similar to criticisms made of the undergraduate peer assessments.

"Just as there are many problems with rankings at the undergraduate level, there are similar concerns with rankings at the graduate level," the authors write. "For example, graduate rankings that include standardized tests in their methodology encourage graduate programs to place a greater emphasis on such tests, perhaps turning their back to the value of a more diverse student body. Another issue regarding the USNWR rankings is the emphasis the magazine places on the reputational surveys of deans and directors. The weight given to the USNWR peer assessment survey certainly suggests the significance of the resource/reputation model of quality."

Then there is the question of the time spent trying to influence the deans who will be rating your graduate school.

"It is difficult, perhaps impossible, to know how accurately the perception of deans and admissions directors matches real quality in graduate education, as distinct from the large amount of marketing and promotional material that schools produce and distribute to their peers, not coincidentally around the same time the USNWR surveys are mailed. It is likely that schools could better utilize their limited resources by focusing their efforts on their students and faculty, rather than on those who will be rating them in a magazine," write the authors.

Morse, however, saw the study as bolstering the argument on behalf of the surveys. He said, for example, that he shares the view that large programs do better. "One way of being a top school is to have strength in a number of areas," he said, whereas a smaller program may "just have one strong department," making breadth a relevant basis for ranking.

Overall, while stressing that he had not had time to study the analysis with care, Morse said, "I think it proves that reputation is a valid indicator since it's correlated with academic indicators," such as those the study cites.
