Britney Spears CDs. McDonald's hamburgers. Sylvester Stallone movies. To the list of exports for which the United States might be less than fully proud, one might add commercially published rankings of colleges and universities. Since U.S. News & World Report first published its annual list of the best colleges 25 years ago, the phenomenon has exploded not only in the United States but in countries such as Australia, Canada, China, Italy, and Spain, often (perhaps not coincidentally) closely following the introduction of policies that require students and parents to pay their own tuition.

A new report by the Educational Policy Institute finds that many of the strengths and especially the perceived weaknesses that so many American college officials find in the U.S. News rankings -- above all, the approach of weighting indicators to come up with one overarching "score" of quality -- are replicated in the comparable lists in other countries. But the authors of the study also identify, in Germany, a possible way out: colleges band together to collect and publish much more data and information than they do now, in a way that enables individual consumers to decide which institutions to rank, and on what basis.

The study, "A World of Difference: A Global Survey of University League Tables," was co-written by Alex Usher, a vice president at the policy institute, a research associate there. It examines 16 national rankings from 10 countries and three systems that attempt to rate institutions from across the globe (one of the three, which was formerly published by Asiaweek, is now defunct).

The rankings have some things in common. Most are produced by commercial publishing companies, and as a result are designed to tell a "story" that consumers can easily grasp. To do that, most try to boil a complex set of factors and data down into what the study calls "single, comparable" -- and, one might add, easily digestible -- "numerical indicators" of quality. The designers of each ranking system, of course, decide what measures to use and how to weight them, and therefore define quality in their own way.

"It is no exaggeration to say that when publishers advertise their product as a guide to 'the best' institutions, it is the publishers themselves who largely decide the best simply through their choice of indicators and weightings," the study says. "In effect, the act of choosing a set of indicators and weightings imposes a definition of 'quality.' "

The different systems use what the study calls a "bewildering array" of indicators, reflecting both the varying availability of data and the designers' differing perceptions of what quality is, and they weight those indicators in wildly different ways: the four ranking systems in China, for instance, place far more emphasis on research measures and virtually ignore teaching, while British systems focus more on staff quality.

The authors find, a bit to their surprise, that despite the great variation in the indicators the different ranking systems use, the same institutions tend to rise to the top of most of them, although the gap between the results for individual colleges widens the further down the rankings you go. The study suggests that this is because there is "some 'dark matter' or 'X factor' exerting a gravitational pull on all ranking schemes such that certain institutions or types of institutions ... would rise to the top regardless of the specific indicators and weightings used." The authors' (speculative) candidates for those non-measured "X factors": "age of institution," "faculty size," or "per-student expenditure."

Because of the problems they perceive with the subjectivity of the rankings, the authors suggest that there may be a better way for consumers and others to measure institutional quality -- and they find it in Germany. There, a think tank called the Centre for Higher Education Development, working with a publisher, Die Zeit, publishes a ranking (German language only) of individual academic departments based on extensive surveys of students and professors and on data gathered independently of the departments themselves (the latter a crucial factor, says Usher, because it "takes the massaging of the data" that some colleges engage in out of the process).

The German ranking does not weight or aggregate the scores on individual indicators into a common "grade," nor does it place the institutions in any numerical order. Instead, it classifies the departments into thirds (top, middle, bottom) on each individual indicator, and it allows individual users to sort the weightings and rankings in its database to compare institutions and departments in the way they choose.
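To make the contrast concrete, here is a hypothetical sketch of that approach (invented departments, indicators, and scores; not the actual CHE/Die Zeit data or software). Each department is labeled top, middle, or bottom third on each indicator separately, and the "ranking" is whatever the user assembles from the indicators he or she cares about.

```python
# Hypothetical sketch of the CHE-style alternative: no composite score, no
# overall rank. Departments are placed in the top, middle, or bottom third on
# each indicator separately; the user decides which indicators matter.
from statistics import quantiles

departments = {
    "Dept A": {"student_satisfaction": 8.1, "library_resources": 6.0, "time_to_degree": 7.4},
    "Dept B": {"student_satisfaction": 6.2, "library_resources": 8.8, "time_to_degree": 5.9},
    "Dept C": {"student_satisfaction": 8.4, "library_resources": 7.1, "time_to_degree": 8.6},
}

def tercile_labels(indicator: str) -> dict[str, str]:
    """Label every department top/middle/bottom third on a single indicator."""
    values = [d[indicator] for d in departments.values()]
    low, high = quantiles(values, n=3)  # tercile cut points
    return {name: ("top" if d[indicator] >= high
                   else "bottom" if d[indicator] <= low
                   else "middle")
            for name, d in departments.items()}

# A prospective student picks the indicators that matter to him or her and
# sees only per-indicator thirds -- no aggregate score is ever computed.
chosen = ["student_satisfaction", "time_to_degree"]
labels = {ind: tercile_labels(ind) for ind in chosen}
for ind in chosen:
    print(ind, labels[ind])

# Optional: a shortlist of departments in the top third on every chosen indicator.
print([name for name in departments
       if all(labels[ind][name] == "top" for ind in chosen)])
```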

"In so doing, the CHE/DAAD approach effectively cedes the power of defining 'quality' -- which, as we have seen, is one of the key roles arrogated by the authors of ranking schemes -- to consumers of the ranking system (i.e., prospective university students and their parents or sponsors)," the study finds.

While some leaders in American higher education have tiptoed in this direction -- the Graduate Management Admission Council, for instance, is working on a database that would make admissions, enrollment, placement and other information about business schools available to the public, as an alternative to business school rankings -- college officials have done relatively little to back up their perpetual complaining about existing rankings with concrete steps to produce an alternative, Usher says.

"Institutions deserve the ranking systems they get," he says. "If they don't like the way they're being ranked, it's often because they're not willing to share the information that consumers really want."

For better or worse, pressure to collect and publish readily comparable, "user friendly" information about colleges in the United States may come from outside higher education, in the form of the Secretary of Education's Commission on the Future of Higher Education.
