
The common criticisms of global university rankings are well-known. In emphasizing publications and citations, they are relevant for only the most elite research universities – the “top 200” or 500 worldwide. They fail to adequately evaluate or account for the diverse activities of any single institution, not least teaching. Technology transfer is likewise under-weighted – if it is considered at all – even as two of the three major global rankings rely heavily on subjective reputational data and the third bases 30 percent of its ranking on the number of alumni and faculty with Nobel Prizes or Fields Medals.

It is against this backdrop that the creators of U-Multirank, a new initiative funded by the European Commission, argue that they have something different to offer: a ranking that can reflect the diversity of institutional profiles. The philosophy behind the ranking can be captured in the buzzwords “multi-dimensional” and “user-driven.” U-Multirank will collect data on indicators in five broad dimensions – teaching and learning, research, knowledge transfer, international orientation, and regional engagement – and will allow users to create personalized rankings based on which indicators they’re most interested in and which types of institutions they’d like to compare. Rather than combining and weighting the indicators to assign a single composite score, the ranking will organize universities into subgroups based on their performance on each of the individual indicators. A user will see, for example, whether a given university is in the top quintile in terms of highly cited research publications and in the third quintile in terms of revenues from continuous professional development courses.
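
To make the contrast with composite rankings concrete, here is a minimal Python sketch of the grouping idea – not U-Multirank’s own code, and the institutions and indicator values below are invented for illustration. Each indicator is reported as a performance group (here, quintiles) on its own, rather than being weighted into a single score.

    from statistics import quantiles

    def quintile_groups(scores):
        """Assign each institution to a quintile group (1 = top) for a single indicator."""
        cutoffs = quantiles(scores.values(), n=5)  # four cut points between the five groups
        return {name: max(5 - sum(value >= c for c in cutoffs), 1)
                for name, value in scores.items()}

    # Hypothetical values for two indicators across four institutions.
    citations = {"Univ A": 12.1, "Univ B": 4.3, "Univ C": 8.7, "Univ D": 2.0}
    cpd_revenue = {"Univ A": 0.2, "Univ B": 1.9, "Univ C": 0.8, "Univ D": 1.1}

    print(quintile_groups(citations))    # Univ A lands in the top group for citations...
    print(quintile_groups(cpd_revenue))  # ...but in a lower group for professional-development revenue

The point of the design is visible even in this toy example: the same institution can sit near the top on one indicator and near the bottom on another, and the two results are never folded into one number.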

“We think that if you create a ranking where you say, in the end, ‘University X is number 57 in the world,’ this leads to a situation where the only thing the ranking is about is the question of are you a world brand," said Frank Ziegele, the managing director of Germany’s Centre for Higher Education, a lead partner in the U-Multirank project.  “Are you able to position yourself in the world market and are you tops in certain forms of research, not in all forms of research, probably not in applied research, but basic research that is documented in bibliometric databases? This is what is measured in university rankings. This is fine, but the scope of these rankings is quite limited. They’re only relevant to 1 percent of the whole population of universities in the world. They’re not able to show diversity in profiles.”

"There should be a ranking that shows diversity.”

The Challenges and the Criticisms

U-Multirank, which was piloted with 159 institutions in 2011, just closed registration for its first full cycle, to include an overall institutional ranking and field-based rankings in business, electrical engineering, mechanical engineering, and physics.  Around 700 universities have signed on to participate, more than two-thirds of which are European. Although the numbers may still change slightly, at this point there are 13 universities from the U.S. participating (predominantly large publics), 12 from Australia and 4 from Canada. There are 33 universities from Asia participating, 15 from Africa (not including the Middle East), and 13 from Latin America. Data collection is planned for this fall with the first rankings scheduled to be published in early 2014.

Even before the first ranking is published, however, U-Multirank has attracted a fair amount of criticism, most notably from the British higher education establishment. In a recent policy note distributed to member universities, the U.K. Higher Education International Unit summed up a number of concerns that have been raised by the country’s universities, including the reliance on data self-reported by institutions and fears that "public funding lends legitimacy to U-Multirank and performance as judged by the tool could become the basis for future funding decisions." It concludes that “U-Multirank may harm rather than benefit the sector. It seems likely that a number of ‘leading’ universities will not take part.”

The League of European Research Universities, which represents 21 leading institutions, including the Universities of Cambridge and Oxford, has stated its opposition to the project. “We consider U-Multirank at best an unjustifiable use of taxpayers’ money and at worst a serious threat to a healthy higher education system,” LERU’s secretary general, Kurt Deketelaere, told Times Higher Education in February. “LERU has serious concerns about the lack of reliable, solid and valid data for the chosen indicators in U-Multirank, about the comparability between countries, about the burden put upon universities to collect data and about the lack of ‘reality-checks’ in the process thus far.” Deketelaere did not respond to several messages seeking comment for this article.

Questions of availability, comparability and validity of data have dogged the U-Multirank project, which is attempting to measure a variety of dimensions for which there are no common global datasets. In a written response to its critics, the team behind U-Multirank states that every international ranking save for the Shanghai Ranking Consultancy’s uses data self-reported by institutions (though it’s fair to say U-Multirank will use self-reported data to a much greater degree, given the range of indicators to be measured). In addition to self-reported data, U-Multirank will also use information from patent and bibliometric databases and student satisfaction surveys it will conduct. It will not collect reputational data.             

“We are clearly aware of the need for thorough verification procedures and have developed a number of statistical procedures and cross-checks to assure the quality of data,” they write, further noting that while they welcome the development of initiatives “to fill the gaps in publicly available data ... at the moment the only alternative to referring to self-reported data is to limit the scope of indicators substantially and so produce a result which is unrepresentative of the wide mission of higher education institutions as a whole and which is unlikely to be of any use outside of the elite research intensive universities.” 

Richard Holmes, an independent academic and editorial consultant who writes the University Ranking Watch blog, said that the heavy reliance on self-reported data could indeed prove problematic, not mainly because of the potential to manipulate the numbers but because of the difficulty in ensuring consistency in how large, complex institutions collect data even when clear definitions are provided. Even so, Holmes is not alone in detecting a self-serving motivation behind the barrage of criticisms that have been levied against U-Multirank, coming as they are from universities that sit atop the current global rankings tables (including the one published by Times Higher Education).

“I think British universities have found that the Times Higher rankings are quite favorable to them, so they are concerned about a ranking which will measure things where Continental European universities might be a bit better,” Holmes said. At the same time, he continued, “there’s been strong pressure from France and from Germany to some extent for a ranking that would be less British- and American-oriented."

Phil Baty, the rankings editor for Times Higher Education, said it's a legitimate criticism that the established rankings systems favor the research-intensive university model common in England and the U.S. Even so, Baty said the risk is that U-Multirank will be perceived as an EU-funded attempt to make European universities look better in the eyes of the world. (In a House of Lords report from 2012, United Kingdom Minister for Universities and Science David Willetts is quoted as saying that U-Multirank could be viewed as “an attempt by the EU Commission to fix a set of rankings in which [European universities] do better than [they] appear to do in the conventional rankings.")

“In general we have to welcome the initiative," Baty said of U-Multirank. "It does seek to address some of the concerns around rankings. It is about trying to unpack more nuanced areas of performance, and it is admirable in its attempt to provide a bigger picture. But having said that, I think they’re making some extremely bold, ambitious claims and it remains to be seen how well they’ll be able to achieve some of the goals.”

"The starting point for me is that Times Higher Education, working with Thomson Reuters, we have a database of more than 700 universities. The ambition with Thomson Reuters is to build a very comprehensive database of about 1,000 institutions, and I think that’s adequate to create a ranking of a particular type of institution: we’re only looking at globally-competitive research institutions. With U-Multirank they are literally trying to capture excellence across all aspects of an extremely diverse higher education landscape. There are basically 20,000 higher education institutions in the world. Their starting point for me really is that they'll need 20,000 institutions to make it work."

Furthermore, he said, "to drill down to the level of detail they aspire to requires a huge amount of effort from each and every one of those institutions. The project is admirable and exciting but I will be astounded if they’re able to pull it off." 

In other words, Baty argued that a weakness of the conventional rankings -- their narrowness -- is also their strength, while for U-Multirank, its breadth, while admirable, could be its Achilles heel.

The Effort: Delving Into the Indicators

The breadth of indicators to be measured is considerable. Under the teaching dimension, for example, U-Multirank will consider a mix of “objective” measures, including graduation rates and faculty-student ratios, as well as responses to a survey designed to gauge student satisfaction on a variety of fronts, including the quality of courses, contact with instructors, and facilities. For the knowledge transfer dimension, indicators include joint research publications with industry, patents per full-time academic staff member, the average number of spin-offs, revenues from continuous professional development, and income from external sources, including consultancies, royalties, and clinical trials.

A full list of indicators to be included in the institutional and field-based rankings, respectively, can be found here. “We’re trying to think about very simplistic, easy-to-gather data, which do not imply a lot of work for institutions and can nevertheless indicate something about what is going on in teaching and learning or regional engagement,” said Frans van Vught, an honorary professor at the University of Twente, in the Netherlands, and, with Ziegele, co-project leader. Indicators on regional engagement, for example, include joint publications with other researchers in the region and the percentage of graduates working in the region (though as the feasibility study that resulted from the pilot of U-Multirank makes clear, even the basic issue of how to define “region” is vexed, particularly in a non-European context).

As a report on rankings released earlier this year by the European University Association points out, multiple indicators that were determined to be of “questionable feasibility” during the pilot phase remain in U-Multirank, including those measuring income from regional sources, art-related outputs (such as exhibition catalogues or musical compositions), and graduate employment rates. Ziegele acknowledged there are challenges with regard to data availability and comparability for these indicators, but said that for graduate employment rates, for example, stakeholders thought it essential that U-Multirank try to begin collecting comparable data on this point. “We want to send a signal that this is something universities should care about." At the same time, he said, U-Multirank will not publish results for any indicator if it is determined that the underlying data are weak.

“If you're not happy with the situation at the moment you should not give up," Ziegele said. "You should try to go down that road and find a way and of course if in the first round we have the feeling there are six indicators where we really can't say this is good data, then we won't publish those six indicators. That’s the advantage of the multi-dimensional approach. You can leave out some indicators and be flexible.”

The immense dataset will be presented via a Web tool that is still in development. Although U-Multirank’s creators are firm on the “user-driven” nature of the ranking, they do expect they will be producing some “pre-defined” rankings of their own in various dimensions (research excellence, for example). “Our methodological and epistemological position would be that there is no best ranking, so if we provide our own we are only one of the many: we might be better informed, we might be those who happened to be creating this, but nevertheless, we are only one selector of indicators according to our values and preferences,” van Vught said.
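
As a rough illustration of what “user-driven” means in practice – again a hypothetical sketch, not the actual Web tool, with invented institutions, indicator names, and group values – a personalized view amounts to selecting institutions and indicators and showing the per-indicator groups side by side, never collapsed into one number:

    # Performance groups per indicator (1 = top quintile); all values are invented.
    groups = {
        "Univ A": {"citations": 1, "cpd_revenue": 4, "student_satisfaction": 2},
        "Univ B": {"citations": 3, "cpd_revenue": 1, "student_satisfaction": 1},
        "Univ C": {"citations": 1, "cpd_revenue": 3, "student_satisfaction": 3},
    }

    def personalized_view(groups, institutions, indicators):
        """Return only the institutions and indicators the user selected, left unweighted."""
        return {u: {i: groups[u][i] for i in indicators}
                for u in institutions if u in groups}

    # A "pre-defined" ranking, such as one for research excellence, is simply a fixed
    # choice of indicators applied to every institution.
    print(personalized_view(groups, ["Univ A", "Univ B"], ["citations", "student_satisfaction"]))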

A European Ranking or a Global One?

An Organization for Economic Cooperation and Development report published in December described U-Multirank as "by far the most significant attempt to overcome many limitations of prevailing rankings."

"We have to say this is a significant improvement," said Ellen Hazelkorn, author of Rankings and the Reshaping of Higher Education (Palgrave Macmillan, 2011) and the vice president for research and enterprise and dean of the Graduate Research School at Dublin Institute of Technology.  "It’s certainly raising a whole lot of other questions about higher education. The problems it’s encountering are symptomatic of the general problems of measuring and comparing quality" -- those problems, she said, being the difficulty of finding meaningful indicators that are actually reflective of quality and not just intensity (the idea that more of something is necessarily better) or inputs. 

"The question," Hazelkorn said, "is will it displace the other rankers or will decision-makers continue to look for simple solutions to complex problems?"

Related to that, one of the many open questions at this point is whether U-Multirank will gain traction outside of Europe. The plan all along has been for it to be a predominantly but not exclusively European ranking: "What we want to have, more or less, is the full coverage of Europe plus the relevant world-wide benchmarks," Ziegele said.

Alex Usher, the president of the Toronto-based Higher Education Strategy Associates, is a fan of the U-Multirank project. As he puts it, it’s "the type of rankings we would have had if universities had been smart enough to get out front of the whole rankings phenomenon."  Still, he's cautious about its chances for success outside Europe. Usher notes that participation requires significant investment of time on the part of institutions and it’s not clear what they get in return. “In Canada, I’ve spoken to a number of institutions, and one of the concerns I’ve heard back from them is we don’t get any data out of this. And who the heck is going to read this? It would be different if they could guarantee an Asian audience, because that’s where most of our foreign students come from, but if it’s seen as being European and internal, and everyone else is there for show so they can benchmark themselves against them, what’s in it for them?"

Furthermore, he said, “even if students look at it, will they have the faintest idea of what to do with multidimensional rankings?” He recalled that in his own experience of running multidimensional rankings for the Toronto Globe and Mail, students would input their preferences and generate a ranking, and then they would look up and ask “which one’s the best?” The concept of which one's the best for you did not compute.

Even so, Usher thinks it’s a worthwhile enough effort that he’s encouraging institutions to participate. As he wrote on his blog, “This is a good faith effort to improve rankings; failure to support it means losing your moral right to kvetch about rankings ever again.”

 
