Last week marked another burst of developments in the world university rankings sector, including two ‘under 50’ rankings. More specifically:
- 29 May 2012: QS launches QS Top 50 Under 50 
- 31 May 2012: Times Higher Education (with Thomson Reuters) launches THE 100 Under 50 
A coincidence? Very unlikely. But who was first with the idea, and why would the other ranker time their release so closely? We don't know for sure, but we suspect the originator of the idea was Times Higher Education (with Thomson Reuters), as their outcome was formally released second. Moreover, the data analysis phase for the production of the THE 100 Under 50 was apparently "recalibrated," whereas the QS data and methodology were the same as those of their regular rankings – QS just sliced the data a different way. But you never know for sure, especially given Times Higher Education's unceremonious dumping of QS for Thomson Reuters back in 2009.
Foreign universities entering into agreements with their Indian counterparts to offer twinning programmes will have to be among the global top 500.
The Indian varsities, on the other hand, should have received the highest accreditation grade, according to the new set of guidelines approved by the University Grants Commission today.
"The underlining objective is to ensure that only quality institutes are permitted for offering the twinning programmes to protect the interest of the students," a source said after a meeting which cleared the regulations on twinning programmes.
They said foreign varsities entering into tie-ups with Indian partners should be ranked among the top 500 by the Times Higher Education World University Ranking or by Shanghai Jiaotong University's ranking of the top 500 universities [now known as the Academic Ranking of World Universities].
Why does this matter? We'd argue that it is another sign of the multi-sited institutionalization of world university rankings. And institutionalization generates path dependency and normalization. When more closely tied to the logic of capital, it also generates uneven development, meaning that there are always winners and losers in the process of institutionalizing a sector. In this case the world's second most populous country, with a fast-growing higher education system, will be utilizing these rankings to mediate which universities (and countries) its institutions can form linkages with.
Now, there are obvious pros and cons to the decision made by India’s University Grants Commission, including reducing the likelihood that 'fly-by-night' operations and foreign for-profits will be able to link up with Indian higher education institutions when offering international collaborative degrees. This said, the establishment of such guidelines does not necessarily mean they will be implemented. But this news item from India, related news from Denmark and the Netherlands regarding the uses of rankings to guide elements of immigration policy (see 'What if I graduated from Amherst or ENS de Lyon… '; 'DENMARK: Linking immigration to university rankings '), as well as the emergence of the ‘under 50’ rankings, are worth reflecting on a little more. Here are two questions we’d like to leave you with.
First, does the institutionalization of world university rankings increase the obligations of governments to analyze the nature of the rankers? As in the case of ratings agencies, we would argue more needs to be known about the rankers, including their staffing, their detailed methodologies, their strategies (including with respect to monetization), their relations with universities and government agencies, potential conflicts of interest, and so on. To be sure, there are some very conscientious people working on the production and marketing of world university rankings, but these are individuals, and it is important to set up the rules of the game so that a fair and transparent system exists. After all, world university rankers contribute to the generation of outcomes yet do not have to experience the consequences of said outcomes.
Second, if government agencies are going to use such rankings to enable or inhibit international linkage formation processes, not to mention direct funding, or encourage mergers, or redefine strategy, then who should be the manager of the data that is collected? Should it solely be the rankers? We would argue that the stakes are now too high to leave the control of the data solely in the hands of the rankers, especially given that much of it is provided for free by higher education institutions in the first place. But if not these private authorities, then who else? Or, if not who else, then what else?
While we were drafting this entry on Monday morning, a weblog entry by Alex Usher (of Canada's Higher Education Strategy Associates) coincidentally generated a 'pingback' to an earlier entry titled 'The Business Side of World University Rankings.' Alex Usher's entry (pasted in below, in full) raises an interesting question that is worthy of careful consideration, not just because of his idea of how the data could be more fairly stored and managed, but also because of his suggestions regarding the process to push this idea forward:
My colleague Kris Olds recently made an interesting point about the business model behind the Times Higher Education's (THE) world university rankings. Since 2009 data collection for the rankings has been done by Thomson Reuters. This data comes from three sources. One is bibliometric analysis, which Thomson can do on the cheap because it owns the Web of Science database. The second is a reputational survey of academics. And the third is a survey of institutions, in which schools themselves provide data about a range of things, such as school size, faculty numbers, funding, etc.
Thomson gets paid for its survey work, of course. But it also gets the ability to resell this data through its consulting business. And while there's little clamour for their reputational survey data (its usefulness is more than slightly marred by the fact that Thomson's disclosure about the geographical distribution of its survey responses is somewhat opaque) – there is demand for access to all that data that institutional research offices are providing them.
As Kris notes, this is a great business model for Thomson. THE is just prestigious enough that institutions feel they cannot say no to requests for data, thus ensuring a steady stream of data which is both unique and – perhaps more importantly – free. But if institutions which provide data to the system want any of that data back out again, they have to pay.
(Before any of you can say it: HESA's arrangement with the Globe and Mail is different in that nobody is providing us with any data. Institutions help us survey students and in return we provide each institution with its own results. The Thomson-THE data is more like the old Maclean's arrangement, with its money-making sidebars.)
There is a way to change this. In the United States, continued requests for data from institutions resulted in the creation of a Common Data Set (CDS) ; progress on something similar has been more halting in Canada (some provincial and regional ones exist but we aren’t yet quite there nationally). It’s probably about time that some discussions began on an international CDS. Such a data set would both encourage more transparency and accuracy in the data, and it would give institutions themselves more control over how the data was used.
The problem, though, is one of co-ordination: the difficulties of getting hundreds of institutions around the world to co-operate should not be underestimated. If a number of institutional alliances such as Universitas 21  and the Worldwide Universities Network , as well as the International Association of Universities  and some key university associations were to come together, it could happen. Until then, though, Thomson is sitting on a tidy money-earner.
While you could argue about the pros and cons of the idea of creating a 'global common data set,' including the likelihood of one coming into place, what Alex Usher is also implying is that there is a distinct lack of governance regarding world university rankers. Why are universities so anemic when it comes to this issue, and why are higher education associations not filling the governance space neglected by key national governments and international organizations? One answer is that their own individual self-interest has them playing the game as long as they are winning. Another possible answer is that they have not thought through the consequences, or really challenged themselves to generate an alternative. Another is that the 'institutional research' experts (e.g., those represented by the Association for Institutional Research in the case of the US) have not focused their attention on the matter. But whatever the answer, at the very least, we think they need to be posing themselves a set of questions. And if it's not going to happen now, when will it? Only after MIT demonstrates some high-profile global leadership on this issue, perhaps with Harvard, like it did with MITx and edX?