
It is that season when ranking entities announce their “findings” on the comparative stature of the world’s universities. Almost certainly, the “premier” universities remain at the top and the rest, African universities in particular, are relegated to the bottom. The “rankers” go about their business, some with audacity, but too often without sufficient concern for veracity, authenticity or integrity in their methodologies, or (especially in the case of Africa) for the lack of data.

Facts vs. Perceptions

For the last three years, the University of KwaZulu-Natal in South Africa has stood at the top of the country in academic productivity as measured by the Department of Higher Education and Training. The Department undertakes the task using parameters that meticulously measure research and academic outputs.

Yet, according to the newly released QS rankings—which allocate 60 percent of their weighting to reputation—the university now stands below six South African universities. This points to a glaring tension between hard data and dubious reputation-based assessment.

Building Reputation: Unpacking the Numbers

The QS ranking is ostensibly a mix of survey responses and data across six indicators, compiled and weighted to formulate a final score. QS claims that over 70,000 academics and 30,000 employers contribute to the rankings through its global surveys, and states that it analyzes 99 million citations from 10.3 million papers before ranking 950 institutions.

Times Higher Education states that its methodology is a unique piece of research that involves “questionnaires [that] ask over 10,500 scholars from 137 countries about the universities they perceive to be best for teaching and research.” It claims that the Academic Reputation Survey “uses United Nations data as a guide to ensure that the response coverage is as representative of world scholarship as possible”. It goes on to state that where countries were over- or underrepresented, the responses were weighted to “more closely reflect the actual geographical distribution of scholars”, throwing further uncertainty onto the changing parameters of the rankings.

There appears to be a conflation of “the world of scholarship” with the “geographical distribution of scholars”, without defining clearly what a “scholar” or “scholarship” is. China, India, and Brazil may have the largest numbers of “scholars”, and by that account the most scholarship, yet they barely make it to the top in the rankings.

According to Times Higher Education, only 2 percent of the survey participants were Africans, presumably located on the continent. As about 50 percent of research in Africa is undertaken in South Africa, one presumes that the number of survey participants in the rest of Africa tapers off to one percent. Around one hundred academics in Africa, outside of South Africa, would have participated in the reputation index “evenly spread across academic disciplines”. Thus, for the 11 disciplines considered in the Times rankings, that would mean about 10 responses per discipline from Africa.
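The back-of-envelope arithmetic above can be made explicit. This is only an illustrative sketch using the figures already cited (10,500 survey respondents, a 2 percent African share, half of African research in South Africa, 11 disciplines); the percentages are the article's estimates, not official survey counts.

```python
# Rough estimate of African coverage in the reputation survey,
# using the figures cited in the text (assumed, not official data).
total_respondents = 10_500   # scholars surveyed by Times Higher Education
african_share = 0.02         # ~2 percent of respondents are African
sa_research_share = 0.5      # ~half of African research is South African
disciplines = 11             # disciplines considered in the rankings

african_respondents = total_respondents * african_share            # ~210
outside_south_africa = african_respondents * (1 - sa_research_share)  # ~105
per_discipline = outside_south_africa / disciplines                # ~9.5

print(round(african_respondents), round(outside_south_africa),
      round(per_discipline, 1))
```

On these assumptions, roughly ten responses per discipline would represent all of Africa outside South Africa, which is the fragile basis of the reputation signal the text questions.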

Rankings Indices

Indeed, rankings are largely about reputation. According to QS, reputation accounts for a substantial share of the calculation: 40 percent from academics and 20 percent from employers. An institution improves its position in the rankings if it scores well in these two perception-based indices.
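The effect of such weighting can be sketched with a toy composite score. The 40/20 reputation weights are those stated above; the remaining indicators are lumped together, and all the indicator scores below are hypothetical placeholders, not real QS data.

```python
# Illustrative composite score with reputation-heavy weights.
# 40% academic reputation + 20% employer reputation (as stated in the text);
# the remaining 40% lumps the other indicators (assumed simplification).
weights = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.20,
    "other_indicators":    0.40,
}

def composite_score(scores):
    """Weighted sum of indicator scores, each on a 0-100 scale."""
    return sum(weights[k] * scores[k] for k in weights)

# A hypothetical university that is strong on measured output (90)
# but weak on perception (50) still lands at a middling 66.
print(composite_score({"academic_reputation": 50,
                       "employer_reputation": 50,
                       "other_indicators": 90}))
```

The sketch shows why perception dominates: with 60 percent of the weight on reputation, strong measured output alone cannot lift the final score.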

The reasons why the world, and Africa especially, would be well served to ignore these rankings are numerous. Consider the QS ranking, which puts considerable weight on the student-faculty ratio. Without exception, the African higher education sector is expanding massively. This has created very high student-staff ratios, forcing African institutions that care about their standing in the rankings to face a difficult choice: either freeze expansion or raise the number of academics. Increasing the number of academics would require massive investments, creative policies and long-term commitments that few institutions are positioned to contemplate.

Other parameters used in the rankings are the international faculty ratio and the international student ratio. In Sub-Saharan Africa, South Africa, Botswana and, to some extent, Namibia are the only countries that attract international faculty, mostly from within the continent. This remains a dream for the rest of Africa.

Likewise, the percentage of international students is another ranking criterion used by QS and others. The number of African countries that attract international students is very small: South Africa, Ghana, Kenya and Uganda. Outside South Africa, virtually all of these “international” students come from other African countries, and even when students do enroll from overseas, it is often only for a semester or two.

The nature of these rankings ensures that the institutions at the top are mostly from the US, year in and year out. The same could be said of the rankings published by Times Higher Education, where institutions in the “middle” and at the “lower” end may have moved up or down a notch but otherwise stay put.

Emphasizing reputation-based criteria does not affect the standing of those established at the top. These institutions tend to be immune to strikes, financial strain, internal strife, or other critical challenges faced by institutions in the developing world.

Manipulating the Rankings

Some enterprising entities, calling themselves data analysts, are already emerging to “help” African institutions do better in the rankings. One flagship university in East Africa is suspected of pursuing that approach, for which it reportedly paid a hefty service fee.

The aggressive positioning of these entities masquerading as service providers—often at major events where senior institutional administrators meet—is nothing more than a swindle. Institutions should use their limited resources effectively rather than pursue shortcuts to an improved ranking.

The Option of Withdrawal

Over a year ago, I received a phone call from a vice-chancellor at a university in South Africa who suggested coordinating a withdrawal from the rankings by the country’s institutions. The proposal was to encourage all universities in the country to refuse to participate and instead to dedicate all their resources, energy and time to more relevant concerns. Rhodes, one of the premier universities in South Africa, already refuses to participate in the rankings, so a precedent exists.

An international roundtable on rankings—supported by the Peter Wall Institute for Advanced Studies at the University of British Columbia—took place in May 2017 in Vancouver. The roundtable deliberated on the scope and significance of university rankings and proposed concrete future actions and interventions on the issue.


As stated by Philip G. Altbach, an internationally recognized scholar of international higher education, rankings are not disappearing anytime soon. As more rankings join the fray, they are likely to generate more buzz to ensure their survival and influence.

Numerous ranking entities generate multiple findings related to the reputation of institutions. As the former Vice-Chancellor of Rhodes University, Saleem Badat, stated, “Rankings, in their current form, serve no meaningful educational or social purpose,” but the tempest in the rankings teapot continues undeterred.
