In a blog published in Inside Higher Ed, Liz Reisberg raised the issue of “rankings and quality”. She made good points about the major contradictions between traditional rankings and quality management, but left out a vital third option: a way of reconciling the two.

All over the world, calls for more transparency about quality in higher education are growing louder. Stakeholders (students, their parents, funders, clients) are no longer satisfied with peer-review based quality assurance systems. Criticism increasingly targets the inward-looking and sometimes even self-congratulatory methodologies these systems use to present quality judgments. Furthermore, external stakeholders get little digestible information from evaluation reports written by experts for experts.

Global rankings appear to fill this need. They offer information to stakeholders who are unable or unwilling to sort through the cumbersome processes of higher education quality assurance, or who distrust its peer-review based approaches.

But the methodologies of the traditional rankings are problematic because of simplistic assumptions about validity and incorrect statistical operations (particularly regarding composite indicators and the creation of one-dimensional league tables). League tables may satisfy media needs for headlines (“The number one is…”), yet they tend to exaggerate differences in performance between universities and give a false impression of exactness (“Number 27 is better than number 29”, when in fact the difference between the two positions may be marginal). By declaring a certain university the “best in the world”, they impose suspect quality standards on ranking users, while the composite score hides those standards behind subjectively weighted indicators and non-transparent methods. Most academics are critical of these methodologies but, rather than providing alternatives, tend to retreat to traditional accreditation and evaluation systems. The wish for transparency remains unsatisfied.
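To see why composite scores are so sensitive to their hidden weights, consider a minimal sketch in Python (with invented scores for two hypothetical universities, not real ranking data): simply swapping the weights reverses the league table.

```python
# Hypothetical illustration (invented data): how the choice of weights,
# not the underlying performance, can decide who is "number one".
indicators = {
    # scores on two dimensions, each on a 0-100 scale
    "University A": {"research": 90, "teaching": 60},
    "University B": {"research": 60, "teaching": 90},
}

def composite(scores, w_research, w_teaching):
    """Weighted composite score, in the style of traditional league tables."""
    return w_research * scores["research"] + w_teaching * scores["teaching"]

# A research-heavy weighting crowns University A ...
ranked = sorted(indicators, key=lambda u: composite(indicators[u], 0.7, 0.3), reverse=True)
print(ranked)  # ['University A', 'University B']

# ... while a teaching-heavy weighting crowns University B.
ranked = sorted(indicators, key=lambda u: composite(indicators[u], 0.3, 0.7), reverse=True)
print(ranked)  # ['University B', 'University A']
```

The performance data never change; only the ranker’s subjective weighting does, and with it the “winner”.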

Quality is in the eye of the beholder, in higher education as in most things. There is a need for transparent instruments that allow stakeholders to make their own quality assessments (a ‘user-driven approach’), based on comparative information that reflects the multiple purposes, activities and profiles of higher education institutions. The information currently available on institutional performance focuses mainly on research-intensive universities and thus covers only a very small proportion of the world of higher education. It is essential to draw on a wider range of analysis and information covering all aspects of performance (teaching and learning, knowledge transfer, internationalisation and regional engagement, as well as research) to help students make informed study choices, to enable institutions to identify and develop their strengths, and to support policy-makers in strategic choices about the reform of higher education systems.

U-Multirank (www.umultirank.org) is such a new transparency tool, offering a more complete picture of the diversity of university performance. It does not produce one-dimensional league tables dominated by bibliometric research-impact indicators, but instead addresses a multiplicity of dimensions of higher education and research; it is user-driven and allows stakeholders to compare university profiles on a “like with like” principle. Users decide which areas of performance to include when comparing their selected group of universities; in this way U-Multirank produces personalised rankings. Student representatives have called this a “democratisation of rankings”. In addition, it allows comparative analyses of more than 1,200 higher education institutions from 83 countries worldwide, as well as at the level of specific fields such as electrical and mechanical engineering, psychology, business studies, physics, medicine and computer science. The number of institutions and fields of study will be expanded in 2016 and each year thereafter.
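The logic of such a personalised ranking can be sketched in a few lines (again with invented institutions, dimensions and rating bands; U-Multirank’s actual indicators and rating groups differ): nothing is aggregated into a single score, and the user chooses which dimensions to display.

```python
# A minimal sketch (invented data and dimension names) of the user-driven idea:
# instead of one composite league table, the user picks the dimensions that
# matter to them and sees each institution's profile side by side.
profiles = {
    "University A": {"teaching": "A", "research": "C", "regional engagement": "B"},
    "University B": {"teaching": "B", "research": "A", "regional engagement": "C"},
    "University C": {"teaching": "A", "research": "B", "regional engagement": "A"},
}

def compare(profiles, chosen_dimensions):
    """Print rating-band profiles for the user-selected dimensions; no aggregation."""
    for name, ratings in profiles.items():
        picked = {d: ratings[d] for d in chosen_dimensions}
        print(f"{name}: {picked}")

# A student who cares about teaching and regional engagement:
compare(profiles, ["teaching", "regional engagement"])
# University A: {'teaching': 'A', 'regional engagement': 'B'}
# University B: {'teaching': 'B', 'regional engagement': 'C'}
# University C: {'teaching': 'A', 'regional engagement': 'A'}
```

Because no composite score is computed, there is no single “number one”; each user sees the profile that matches their own definition of quality.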

In her blog, Liz Reisberg asked universities to “develop a more useful way to communicate how our work affirms quality”. U-Multirank provides that way. It widens the horizon by measuring quality in an international, comparative context, and it leaves the final definition of quality to the individual stakeholder using the tool. We must of course keep in mind that performance data should support informed decisions: a performance outcome measured by indicators has to be discussed and explained for the measurement to be relevant.

U-Multirank published its second edition of rankings on 30 March 2015. Covering over 1,200 higher education institutions, with 1,800 faculties and 7,500 study programmes across seven fields of study, U-Multirank is the largest global ranking and the most comprehensive information system on universities in the world to date.


Frans van Vught, Center for Higher Education Policy Studies, The Netherlands
Frank Ziegele, Centre for Higher Education, Germany

 
