Editors' note: if you are exploring GlobalHigherEd today, please be aware that we've posted two entries regarding the world university rankings issue. These entries are designed to be complementary, though they clearly operate at different levels. Our sincere thanks to Pablo Achard of the University of Geneva for his very thoughtful guest entry ('Rankings: a case of blurry pictures of the academic landscape?'). Needless to say, the entry below is an invitation to reflect critically on world university ranking futures.
Best wishes, Kris Olds & Susan Robertson
~~~~~~~~~~~~~~~~~~~~~~
Is it now time to ensure that world university rankers are overseen, if not governed, so as to achieve better quality assessments of the differential contributions of universities in the global higher education and research landscape?
In this brief entry we make the case that something needs to be done about the system in which world university rankers operate. We offer two points about why action is needed, and then outline some options for moving beyond today's status quo.
First, while universities and rankers alike are interested in how well universities are positioned in the emerging global higher education landscape, power over the process, as currently exercised, rests solely with the rankers. Clearly firms like QS and Times Higher Education are open to input, advice, and indeed critique, but in the end they, along with information services firms like Thomson Reuters, decide:
- How the methodology is configured
- How the methodology is implemented and vetted
- When and how the rankings outcomes are released
- Who is permitted access to the base data
- When and how errors are corrected in rankings-related publications
- What lessons are learned from errors
- How the data is subsequently used
Rankers have authored the process, and universities (not to mention associations of universities, and ministries of education) have simply handed over the raw data. Observers of this process might be forgiven for thinking that universities have acquiesced to the rankers’ desires with remarkably little thought. How and why we’ve ended up in such a state of affairs is a fascinating (if not alarming) indicator of how fearful many universities are of being erased from increasingly mediatized viewpoints, and how slow universities and governments have been in adjusting to the globalization of higher education and research, including the desectoralization process. This situation has some parallels with the ways that ratings agencies (e.g., Standard and Poor’s or Moody’s) have been able to operate over the last several decades.
Second, and as has been noted in two of our recent entries:
the costs associated with providing rankers (especially QS and THE/Thomson Reuters) with data are increasingly concentrated on universities.
On a related note, there is no rationale for the now-annual rankings cycle that the rankers have successfully been able to normalize. What really changes from year to year apart from changes in ranking methodologies? Or, to paraphrase Macquarie University's vice-chancellor, Steven Schwartz, in this Monday's Sydney Morning Herald:
"I've never quite adjusted myself to the idea that universities can jump around from year to year like bungy jumpers," he says.
''They're like huge oil tankers; they take forever to turn around. Anybody who works in a university realises how little they change from year to year.''
Indeed, if the rationale for an annual cycle of rankings were so obvious, government ministries would surely run more annual assessment exercises of their own. Even the most managerial and bibliometric-predisposed of governments anywhere – in the UK – has spaced its intense research assessment exercise out over a 4-6 year cycle. And yet the rankers have universities on the run. Why? Because this cycle facilitates data provision for commercial databases, and it enables increasingly competitive rankers to construct their own lucrative markets. This, perhaps, explains the 6 July 2010 reaction from QS to a call in GlobalHigherEd for a four-year rather than a one-year rankings cycle:
Thus we have a situation where rankers seeking to construct media/information service markets are driving up data provision time and costs for universities, facilitating continual change in methodologies, and as a consequence generating some surreal swings in ranked positions. Signs abound that rankers are driving too hard and taking too many risks, while failing to respect universities, especially those outside the upper echelon of the rank orders.
Assuming you agree that something should happen, the options for action are many. Given what we know about the rankers, and the universities that are ranked, we have developed four options, in no order of priority, to further discussion on this topic. Clearly there are other options, and we welcome alternative suggestions, as well as critiques of our ideas below.
The first option for action is the creation of an ad-hoc task force by 2-3 associations of universities located within several world regions, the International Association of Universities (IAU), and one or more international consortia of universities. Such an initiative could build on the work of the European University Association (EUA), which created a regionally-specific task force in early 2010. Following an agreement to halt world university rankings for two years (2011 & 2012), this new ad-hoc task force could commission a series of studies regarding the world university rankings phenomenon, not to mention the development of alternative options for assessing, benchmarking and comparing higher education performance and quality. In the end the current status quo regarding world university rankings could be sanctioned, but such an approach could just as easily lead to new approaches, new analytical instruments, and new concepts that might better shed light on the diverse impacts of contemporary universities.
A second option is an inter-governmental agreement about the conditions in which world university rankings can occur. This agreement could be forged in the context of bilateral relations between ministers in select countries: a US-UK agreement, for example, would ensure that the rankers reform their practices. A variation on this theme is an agreement among ministers of education (or their equivalent) in the context of the annual G8 University Summit (to be held in 2011), or the next Global Bologna Policy Forum (to be held in 2012), which will bring together 68+ ministers of education.
The third option for action is non-engagement, as in an organized boycott. This option would have to be pushed by one or more key associations of universities. The outcome of this strategy, assuming it is effective, is the shutdown of unique data-intensive ranking schemes like the QS and THE world university rankings for the foreseeable future. Numerous other schemes (e.g., the new High Impact Universities) would carry on, of course, for they use more easily available or generated forms of data.
A fourth option is the establishment of an organization that has the autonomy, and resources, to oversee rankings initiatives, especially those that depend upon university-provided data. No such organization currently exists, for the only one that comes close to what we are calling for (the IREG Observatory on Academic Ranking and Excellence) suffers from the inclusion of too many rankers on its executive committee (a recipe for serious conflicts of interest) and from its reliance on member fees for a significant portion of its budget (ditto).
In closing, the acrimonious split between QS and Times Higher Education, and the formal entry of Thomson Reuters into the world university rankings business, have elevated this phenomenon to a new 'higher-stakes' level. Given these developments, given the expenses associated with providing the data, given some of the glaring errors or biases associated with the 2010 rankings, and given the problems associated with using university-scaled quantitative measures to assess 'quality' in a relative sense, we think it is high time for some new forms of action. And by action we don't mean more griping about methodology, but attention to the ranking system that universities are embedded in, yet have singularly failed to construct.
The current world university rankings juggernaut is blinding us, yet innovative new assessment schemes -- schemes that take into account the diversity of institutional geographies, profiles, missions, and stakeholders -- could be fashioned if we pause. It is time to make more proactive decisions about just what types of values and practices should underlie comparative institutional assessments within the emerging global higher education landscape.