The latest PISA results have kicked up quite a storm in Brazil. I suspect there have been similar responses elsewhere. The Brazil debate seems to focus on the validity of the data and sample and whether the results capture an accurate picture of the capabilities of Brazilian students.
Each time a new ranking is published it “kicks up” a similar national storm, particularly when a country or university is unhappy with where it lands on the list. Additionally, the rankings have been sliced and recreated in so many different ways (by region, by reputation, by research output, by institution’s age, by a country’s stage of economic development) that it makes one’s head spin. I have never been a fan of rankings, and each new variation diminishes my confidence that they are of any value whatsoever. Yet international comparisons divert attention from the main purpose of the educational enterprise and often influence national policy; in that sense, they are dangerous.
One of the aspects of these international comparisons that bothers me most is that they are used to examine one’s competitive position, not one’s educational achievement. And we are judging competitive position based on dubious criteria. What we really need to be doing is tracking educational outcomes; these international comparisons serve only as a distraction from that objective.
One of the themes I revisit over and over again in my work on quality assurance is the fact that no one has managed to develop a specific definition of quality that can be applied widely—or at least a definition that can be applied usefully across a diverse educational landscape. “Outcomes” is an equally slippery concept, but like “quality,” extremely important. As I have noted in a previous blog, sadly, we resort to measuring what we can measure, not necessarily what we need to find out.
As has been said many times, the missions of thousands of institutions worldwide vary according to their respective realities. Rankings do not and cannot take this into consideration; testing cannot take this into consideration—yet we use the results to compare ourselves to others and, in some countries, to determine the allocation of resources.
So what is the value of international comparisons? In an ideal world they would be useful for benchmarking against an internationally valid measurement. But we have not come close to useful, internationally valid standards, so we should be careful about how we are influenced by any of this.