As more universities move toward a corporate model of organization, faculty are asked to prove their worth through “impact factors.” “Impact” is most commonly measured by the number of citations a scholar receives. That is a shallow way of measuring impact, and overreliance on such measures, particularly in tenure cases, will only hurt universities and students.
Relying on the number of citations a scholar has, particularly early in her or his career, will miss the real impact a professor may have. Citation counts are just that: counts. There is no way to know how or why a work was cited without tracking down each citation and analyzing its context. If someone’s work is cited as “possibly the worst example of…” or “a sloppy example of…,” it still counts as a citation. Does that measure impact? I’ve seen cases where work is mis-cited (my own, by a grad student who clearly hadn’t read it). Is that impact?
Further, when do we start measuring the impact of a work? The minute it is in print? Certainly there are well-established scholars whose work is eagerly awaited and cited immediately upon publication. But for the most part, citations take a while to crop up: if something is cited in a journal article, that article was probably in the works for at least two years. A typical tenure-track professor has five years to make the case. Say such a professor publishes an article every year. When will those articles be cited? And how often?
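To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The five-year clock, the one-article-per-year pace, and the two-year citation lag are assumptions for illustration, not data:

```python
# Back-of-the-envelope tenure timeline. All numbers are illustrative
# assumptions: a five-year clock, one article per year, and a roughly
# two-year lag before citations to a new article start to appear.

TENURE_REVIEW_YEAR = 5   # assumed length of the tenure clock
CITATION_LAG_YEARS = 2   # assumed delay between publication and first citations

for pub_year in range(1, TENURE_REVIEW_YEAR + 1):
    first_citation_year = pub_year + CITATION_LAG_YEARS
    status = ("countable at review" if first_citation_year <= TENURE_REVIEW_YEAR
              else "invisible at review")
    print(f"Article from year {pub_year}: first citations ~year {first_citation_year} ({status})")
```

Under those assumptions, only the articles from the first three years can show any citations at all when the tenure file is assembled; the rest are invisible to a citation-based metric, however good they are.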
Let’s not forget that there is also a feedback loop involved. Citations beget citations: we tend to look at the citations in an article or book we find useful and follow up on those (it’s a lazy way of working, but it happens). You can see the problem. The more this happens, the smaller the cited pool becomes, and after a few iterations you have a “canon.” We need a meaningful way to find and promote new and interesting work that lies outside the mainstream.
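This feedback loop is easy to simulate. Below is a minimal “rich get richer” sketch in Python; the number of works, the number of citations, and the 80 percent follow-the-bibliography rate are made-up parameters, not a model of any real literature:

```python
import random

random.seed(0)  # fixed seed so runs are reproducible

NUM_WORKS = 1000       # works in the field (assumed)
NUM_CITATIONS = 5000   # citations handed out over time (assumed)
FOLLOW_PROB = 0.8      # chance a citer follows an existing bibliography (assumed)

citation_counts = [0] * NUM_WORKS
cited_pool = []  # one entry per citation received; sampling from it is
                 # proportional to how often a work has already been cited

for _ in range(NUM_CITATIONS):
    if cited_pool and random.random() < FOLLOW_PROB:
        # "Lazy" citing: follow a reference from work that is already cited.
        target = random.choice(cited_pool)
    else:
        # Independent discovery: any work in the field, cited or not.
        target = random.randrange(NUM_WORKS)
    citation_counts[target] += 1
    cited_pool.append(target)

top_ten = sorted(citation_counts, reverse=True)[:10]
never_cited = sum(1 for c in citation_counts if c == 0)
print(f"Top 10 works hold {sum(top_ten)} of {NUM_CITATIONS} citations")
print(f"{never_cited} of {NUM_WORKS} works were never cited")
```

In a typical run, a small handful of early-cited works collects a large share of all the citations while most works are never cited at all, and nothing in the model distinguishes the winners by quality; they were simply cited early.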
So, is it fair or reasonable to judge a junior faculty member by “impact factors” based solely on citations? Here are impact factors, in addition to scholarship (cited or not), that make more sense for those who haven’t yet crossed the line into academic stardom but are looking to prove their worth (say, for tenure):
- How much of an impact does this scholar have in the classroom? Do students flock to that professor? Has that professor changed lives? Does that professor bring her or his own research into lectures? Does that professor bring innovative teaching methods to the classroom?
- How much of the professor’s own research is reflected in her or his teaching? How much does this professor involve students in her or his research?
- How often is this professor sought out on campus to serve on panels? To give talks? To guide students?
- How much involvement does this professor have within the larger profession? Is she or he sought out to review articles? To serve on conference boards? Does this professor write for a blog? How present is this professor on (professional) social media?
- Does this professor have an impact on the overall life of campus?
Wait a minute, you might say. Those should be, or already are, the stated criteria for tenure at most universities: scholarship, teaching, and service. But the reality is that, in universities focused heavily on rankings, administrators aren’t much interested in what they perceive as “intangible” impact factors; citation counts are more easily obtained and seem to present empirical data on impact. It’s a lazy measure, though, and one that may lead to the loss of professors who impact the university and the field in many other ways.