I have a confession. The rankings of the world’s top universities that my magazine has been publishing for the past six years, and which have attracted enormous global attention, are not good enough. In fact, the surveys of reputation, which made up 40 percent of scores and which Times Higher Education until recently defended, had serious weaknesses. And it’s clear that our research measures favored the sciences over the humanities.
We always knew that rankings had their limitations. No ranking can be definitive. No list of the strongest universities can capture all the intangible, life-changing and paradigm-shifting work that universities undertake. In fact, no ranking can even fully capture some of the basics of university activity – there are no globally comparable measures of teaching quality, for example.
But we do believe that rankings have some real uses, and love them or hate them, they are here to stay. Rankings help students select courses, help faculty make career choices, help department heads choose new research partners and help university managers set strategic priorities.
Research from an American think tank, the Institute for Higher Education Policy, published last year criticized some of the effects of rankings, but also found that they can “prompt change in areas that directly improve student learning experiences,” can “encourage institutions to participate in broader national and international discussions” and “foster collaboration, such as research partnerships, student and faculty exchange programs.” All positive.
The IHEP report, “Impact of College Rankings on Institutional Decision Making,” also found that rankings “play an important role in persuading the government and universities to rethink core national values.”
As nations reshape their economies – through heavy investment in higher education – for a knowledge- and innovation-driven future, worldwide rankings are set to play an ever more influential role.
In this context, those of us who seek to rank universities have a responsibility to make them as rigorous and balanced as possible.
This is where we have fallen short in the past. An opinion survey of academics and university administrators from around the world, published earlier this month by Thomson Reuters, found that the sector generally valued rankings. But it also revealed “unfavorable” perceptions of the indicators and methodology used and uncovered “widespread concern about data quality,” especially among higher education professionals in North America and Europe.
We have received a lot of flak over our rankings, from people taking issue with our data and methodology and from others more fundamentally opposed to the idea that you can, or should seek to, rank universities at all.
One of the most stinging attacks came from Andrew Oswald, one of the world’s leading economists and a professor at Britain’s University of Warwick. In late 2007, he mocked the pecking order of that year’s rankings in my magazine, in which the UK’s Universities of Oxford and Cambridge were tied for second, while Stanford University was 19th – despite having “garnered three times as many Nobel Prizes over the past two decades as the Universities of Oxford and Cambridge did combined,” he wrote.
He noted with contempt that the University of California at Berkeley was rated equal to the University of Edinburgh at 22nd position. “The organizations who promote such ideas should be unhappy themselves, and so should any supine universities who endorse results they view as untruthful,” he wrote.
Universities are, of course, in the “truth business,” as Professor Oswald put it. Journalists should share that mission. Rankings can never arrive at “the truth” – too many judgment calls have to be made, too many proxies for complex activities have to be employed – but they can get closer to the truth, by being more rigorous, sophisticated and transparent.
Our annual world university rankings were first published in 2004 – when Times Higher Education magazine had not only a different editor but also a different owner.
Last year, following a relaunch of the publication and under a new editor, Ann Mroz, who felt uncomfortable with the rankings we had been publishing, the first major opportunity for a wholesale review of the rankings emerged. We took the chance to act. We decided to start again. We felt that the international university sector deserved something more credible – a serious tool rather than an annual curiosity or marketing opportunity.
We are now working with the world-leading research data specialists Thomson Reuters, which will collect and analyze all the data for our rankings for 2010 and beyond.
So what was so bad about our previous efforts? Of most concern was the so-called “peer review” score. Some 40 percent of a university’s overall ranking score was based on the results of a “peer review” exercise – in fact, a simple opinion survey of academics, asking them which institutions they rated most highly. In many respects, it was quite similar to the reputation survey used by U.S. News & World Report to rank American colleges.
Many object in principle to the use of any subjective measures in rankings, arguing that they reflect past rather than current performance, that they are based on stereotype or even ignorance, and that a good or bad reputation may be mindlessly replicated. The United States has had its fair share of scandal here, most memorably last year when Inside Higher Ed revealed that a former American university official had told a conference that, in an influential reputation survey, her colleagues routinely gave low ratings to all programs other than their own institution’s.
But the use of a reputation measure was endorsed in the Thomson Reuters survey. Some 79 percent of respondents said that “external perception among peer researchers” was a “must have” or “nice to have” indicator. And we believe that it can provide useful context, measuring things that simple quantitative data cannot.
But to be useful, such surveys must be handled properly – and there’s the rub.
The reputation survey carried out by our former ranking partner attracted only a tiny number of respondents. In 2009, about 3,500 people provided their responses – a fraction of the many millions of scholars throughout the world. The sample was simply too small, and the weighting too high.
So we’ve started again. For 2010, Thomson Reuters has hired the pollster Ipsos MediaCT to carry out the reputation survey and has committed to gathering a larger number of responses from a pool that truly represents the international university community. We are not looking to increase volume for its own sake; we are seeking greater volume combined with a much better targeted sample. The key is a wider spread of people, more accurately reflecting the real demographics of global higher education.
The questions have been carefully prepared to make clear what is being judged and to elicit meaningful responses. Instead of a simple, generic “who’s best,” we will ask more detailed questions designed to draw out more informed and consistent answers. We will ask people to judge quality in their field, in their region and globally; we will ask about both teaching quality and research quality; and we will ask carefully prompted questions – for example, which institutions produce the best graduate applicants, or where respondents would recommend their top undergraduates apply for the best graduate programs.

A “platform group” of more than 50 leading university principals, presidents and vice chancellors from around the world has been established to help scrutinize the results and to ensure that responses reflect the true demographics of international higher education. Perhaps most significantly, the opinion poll will go only to invited participants – we will not adopt a scattergun, mass-mailing approach that collects the responses of anyone who cares to reply.
The rest of the rankings methodology is also open for a complete review, and we are seeking informed opinion.
We have already responded to criticisms and declared that we will change the way research excellence is measured, to take into account the dramatically different citation habits between disciplines, ending the clear bias against the arts, humanities and social sciences in our old methodology.
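To illustrate the general principle of field normalization – and this is only an illustrative sketch, not the methodology we have adopted with Thomson Reuters – a paper’s citations can be compared with the world average for papers in the same field and year, so that a well-cited history monograph is not swamped by the raw counts typical of molecular biology. The field names, baseline figures and data below are hypothetical.

    # Illustrative sketch of field-normalized citation impact.
    # All field names, world-average baselines and paper data are hypothetical.

    # Hypothetical world-average citations per paper, by field and publication year.
    WORLD_BASELINES = {
        ("molecular biology", 2007): 18.0,
        ("history", 2007): 2.5,
    }

    # Hypothetical papers for one institution: (field, year, citations received).
    papers = [
        ("molecular biology", 2007, 27),
        ("molecular biology", 2007, 9),
        ("history", 2007, 4),
        ("history", 2007, 1),
    ]

    def normalized_impact(papers, baselines):
        """Average of citations divided by the world average for the same field and year.

        A score of 1.0 means the papers are cited exactly at the world average for
        their fields, so raw counts no longer favor high-citation disciplines over
        the arts, humanities and social sciences.
        """
        ratios = [
            citations / baselines[(field, year)]
            for field, year, citations in papers
        ]
        return sum(ratios) / len(ratios)

    print(f"Field-normalized citation impact: {normalized_impact(papers, WORLD_BASELINES):.2f}")

In this toy example, raw counts would make the two history papers look far weaker than the biology papers; measured against their own fields’ averages, the institution scores at the world benchmark in both.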
But we need more input. Is a staff-to-student ratio a suitable proxy for teaching quality, and what weighting should it have? Is it fair to give credit for the proportion of international students on a campus when there is no way of judging the quality of those students? What other indicators should we avoid, and what should we include?
Some may suspect that we are making these major changes cynically. At a joint Times Higher Education-University of Nottingham conference last month, Alice Gast, president of Lehigh University, argued that magazines that compile rankings have an interest in creating instability. Dramatic movements in the rankings keep university marketing departments busy, make news headlines and help circulation figures. She has a good point. By changing our rankings we will inevitably create more instability, especially between 2009, the last of our old rankings, and 2010.
But Times Higher Education magazine primarily serves the international university community – not a mass consumer audience – so the credibility of our rankings with a highly engaged and intelligent audience is paramount. Our community can easily spot – and would not easily forgive – a cynical effort. So much rests on the results of our rankings – individual university reputations, student recruitment, vice chancellors’ and presidents’ jobs in some cases, even major government investment decisions. We have a duty to overhaul the rankings to make them fit for such purposes.