At the recent biennial conference of INQAAHE (the International Network for Quality Assurance Agencies in Higher Education) in Chicago, there were more than a few disparaging references to the rankings. The conference brought together representatives of national quality assurance agencies from all over the world. For individuals who dedicate their working hours (and most likely endless additional hours of reflection and research) to the quest of defining, evaluating and pursuing “quality” in higher education, the rankings are an unwelcome distraction indeed.
I suspect that the frustration of the attendees at the INQAAHE conference stems from the fact that rankings too often become a surrogate for quality. No matter how many articles appear in the media or in academic journals explaining the rankings, their flaws, their limitations, and so on, stakeholders outside of the academy will continue to reference them to decide which universities are “the best.” This is all the more frustrating for those of us familiar with the criteria that shape ranking results, since those criteria are often irrelevant to the needs of the individuals and organizations that rely on them.
Yet the annual release of international rankings from the Academic Ranking of World Universities in Shanghai, the Times Higher Education, and QS continues to attract a great deal of attention. The problem, rarely acknowledged, is that the rankings fill a need. With an estimated 16,000 institutions of higher education in the world (more than 3,000 universities in the US alone), some means of making distinctions among them is required. Pity the student who has private funding or a scholarship that allows for study anywhere in the world and must begin to sort through the overwhelming number of options. The employer trying to choose among several job applicants with foreign degrees and credentials from unfamiliar institutions is often equally flummoxed. Governments supporting large scholarship programs frequently rely on rankings to determine where sponsored students can study, often consulting multiple rankings to establish an “acceptable” placement. Rankings provide a quick and easily accessible reference; quality assurance schemes do not.
The rankings are an efficient mechanism for narrowing the global universe of institutions to a more manageable number, never mind that the results exclude excellent institutions simply because they do not fit the protocol used. I will not review how the rankings are constructed here, as this analysis has been done in the excellent work of Ellen Hazelkorn and others. Suffice it to say that, with the exception of the extensive categories provided by US News & World Report, the rankings tend to favor elite, well-funded research universities, and a first-rate university education is most certainly not limited to that type of institution.
The quality assurance schemes operating in most nations provide much more detailed (and relevant) information. These programs evaluate degree programs and institutions one at a time, measuring performance against institutional mission and within a national context. The process leading to accreditation includes a long, detailed self-assessment followed by an evaluation by qualified external reviewers. The resulting, often technical, reports provide very useful information. But to the general public, quality assurance schemes offer only a yes-or-no answer. In other words, the detailed assessment of an institution leads to “yes,” accredited, or “no,” not accredited. And “yes” covers enormous diversity in the higher education environment, with no mechanism for comparison. It is unlikely that prospective students or employers will dedicate the time to reading the evaluation reports (if they are even publicly available) to learn the finer details of an institution’s strengths and unique qualities.
I am not advocating a graded system of accreditation, although some countries use one. I am only suggesting that while national and international quality assurance agencies provide much more careful and in-depth assessments of quality than any of the rankings, their conclusions are less useful because they are less comprehensible to key stakeholders.
We have to recognize that there is a general need to make sense of the diverse global higher education environment and that, for the moment, the rankings provide a more expedient tool for measuring and comparing institutions than the accreditation agencies do. If the higher education community (including quality assurance agencies) hopes to diminish the influence of rankings, then we will have to develop a more useful way to communicate how our work affirms quality.