The analysis of citations -- examining what scholars and scientists publish in order to assess their productivity, impact, or prestige -- has become a cottage industry in higher education. And it is an endeavor that needs more scrutiny and skepticism. The approach has been taken to extremes, both for assessing individuals and for gauging the productivity and influence of entire universities or even academic systems. Bibliometrics was pioneered in the United States in the 1950s as a tool for tracing research ideas, the progress of science, and the impact of scientific work. Developed for the hard sciences, it was later extended to the social sciences and humanities.
Citation analysis, relying mostly on the databases of the Institute for Scientific Information (ISI), is used worldwide. Increasingly sophisticated bibliometric methodologies permit ever more fine-grained analysis of the articles included in the ISI corpus of publications. The basic idea of bibliometrics is to measure the impact of scientific and scholarly work, not its quality. The somewhat questionable assumption is that a widely cited article both has impact and is of high quality. Quantity of publications is not the main criterion: a researcher with a single widely cited article may be considered influential, while a scholar with many uncited works is seen as less so.
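To make that logic concrete, here is a minimal sketch in Python, using invented citation counts purely for illustration -- it is not a real bibliometric tool, only a toy model of how an impact-by-citation-count metric rewards one heavily cited paper over many uncited ones:

```python
# Hypothetical data: each list holds the citation counts of one researcher's articles.
researcher_a = [950]                      # one widely cited article
researcher_b = [0, 2, 0, 1, 0, 3, 0, 0]   # many rarely cited articles

def impact(citation_counts: list[int]) -> int:
    """Total citations: the crude proxy for 'impact' that citation analysis relies on."""
    return sum(citation_counts)

print(impact(researcher_a))                    # 950 -> deemed influential
print(impact(researcher_b))                    # 6   -> deemed less influential
print(len(researcher_b) > len(researcher_a))   # True: more publications, lower score
```

Under such a metric, quantity of publications is irrelevant and quality is simply inferred from citations, which is precisely the assumption questioned above.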
Bibliometrics plays a role in the sociology of science, revealing how research ideas are communicated, and how scientific discovery takes place. It can help to analyze how some ideas become accepted and others discarded. It can point to the most widely cited ideas and individuals, but the correlation between quality and citations is less clear.
The bibliometric system was invented to serve American science and scholarship. Although it is now used by an international audience, it remains largely American in focus and orientation. It is almost exclusively in English -- due in part to the predominance of English-language scientific journals and in part because American scholars communicate almost entirely in English. Researchers have noted that Americans largely cite the work of other Americans in U.S.-based journals, while scholars in other parts of the world are more international in their research perspectives. This American insularity further distorts the citation system in terms of both language and nationality.
The American orientation is not surprising. The United States dominates world R&D spending -- around half of all R&D funds are still spent there, although other countries are catching up -- and a large percentage of the world’s research universities are American. In the 2005 Times Higher Education Supplement ranking, 31 of the world’s top 100 (research-focused) universities were in the United States. A large proportion of internationally circulated scientific journals are edited in the United States because of the size and strength of the American academic market, the predominance of English, and the overall productivity of the academic system. This high U.S. profile reinforces the academic and methodological norms of American academe in most scientific fields. The hard sciences are probably less prone to an American orientation and by their nature less insular, but the social sciences and some other fields often demand that authors conform to the largely American methodological norms and orientations of the journals in those fields.
The journals included in the databases used for citation analysis are a tiny subset of the scientific journals published worldwide. They are, for the most part, the mainstream English-medium journals in the disciplines. The ISI was established to examine the sciences, so it is not surprising that the hard sciences are overrepresented and the social sciences and humanities less prominent. Further, scientists tend to cite more material than other scholars do, boosting the citation counts of scientific articles and, presumably, their measured impact.
The sciences produce some 350,000 new cited references weekly, while the social sciences generate 50,000 and the humanities 15,000. This means that universities with strength in the hard sciences are deemed more influential and seen to have greater impact -- as are individuals who work in these fields. The biomedical fields are especially overrepresented because of the sheer number of citations they generate. All of this puts individuals and institutions in developing countries, where the hard sciences are weaker and there is less capacity to build expensive laboratories and other facilities, at a significant disadvantage.
It is important to remember that the citation system was invented mainly to understand how scientific discoveries and innovations are communicated and how research functions. It was not, initially, seen as a tool for the evaluation of individual scientists or entire universities or academic systems. The citation system is useful for tracking how scientific ideas in certain disciplines circulate among researchers at top universities in the industrialized countries, as well as how individual scientists use and communicate research findings.
A system invented for quite limited purposes is now used for ends it was never intended to serve. Hiring authorities, promotion committees, and salary-review officials use citations as a central part of the evaluation process. This approach overemphasizes the work of scientists -- those with access to the key journals and the resources to do cutting-edge research in an increasingly expensive academic environment. Another problem is the overemphasis on academics in the hard sciences at the expense of those in the social sciences and, especially, the humanities. Academics in many countries are urged, or even forced, to publish their work in journals that are part of a citation system -- mainly the major English-language journals published in the United States and a few other countries. This pushes them into the norms and paradigms of those journals and may well keep them from researching and analyzing topics directly relevant to their own countries.
Citation analysis, along with other measures, is used prominently to assess the quality of departments and universities around the world and to rank institutions and systems. This practice, too, creates significant distortions. Again, developing countries and small industrialized nations that do not use English as the language of higher education are at a disadvantage. Universities strong in the sciences have an advantage in the rankings, as do those whose faculty members publish in journals covered by the citation systems.
The misuse of citation analysis distorts the original purposes of bibliometric systems. Stretching bibliometrics in this way is grossly unfair to those being evaluated and ranked. The “have-nots” in the world scientific system are put at a major disadvantage. Creative research at universities around the world is downplayed because it falls outside the narrow paradigms of the citation-analysis system. The system overemphasizes work written in English, gives the hard sciences too much attention, and is particularly hard on the humanities. Scholarship published in “nonacademic” outlets, including books and popular journals, is ignored. Evaluators and rankers need to go back to the drawing board to devise a reliable system that can accurately measure the scientific and scholarly work of individuals and institutions. The unwieldy and inappropriate use of citation analysis and bibliometrics for evaluation and ranking does not serve higher education well -- and it entrenches existing inequalities.