For some young professors, an evaluative tool called the “Tenurometer” (i.e., tenure-o-meter) is sure to turn some heads.
Which is the point, says Filippo Menczer, associate professor of informatics and computer science at Indiana University and co-creator of the Tenurometer, a cheekily named tool (still in beta) designed to measure scholars' impact on their fields by counting how much they have contributed to the literature and how frequently those articles have been cited.
The use of citation-based “impact” metrics to help departments assess their professors’ publishing performance is nothing new. Across higher education, there has been a push to increase accountability by analyzing the oceans of metadata that come from storing information on computer servers, where it can be sorted like never before. And as Google Scholar and others have emerged to put huge, searchable repositories at anyone’s fingertips, programs such as Publish or Perish — Tenurometer’s closest ancestor — have been close behind, giving professors and their bosses ways to quantify who is more prolific than whom.
The Tenurometer, a project by Menczer and Diep Thi Hoang, a graduate student at Indiana’s School for Computing and Informatics, uses the same method as Publish or Perish to find this out within various disciplines: the “h-index.” Proposed in 2005 by physicist Jorge E. Hirsch, the h-index combines researchers' scholarly output with the influence of their work on subsequent research in the field. Hirsch suggested it as the best single criterion for determining promotions.
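Hirsch's definition is simple enough to sketch in a few lines: a scholar has index h if h of their papers have each been cited at least h times. The snippet below is an illustrative sketch of that calculation (the citation counts are invented example data, not figures from the article):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Rank the papers by citation count, highest first.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers each have >= rank citations
        else:
            break
    return h

# Example: five papers with these citation counts yield an h-index of 3,
# because three papers have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # → 3
```

Note that the metric rewards sustained influence rather than a single blockbuster paper: one article with 1,000 citations still gives an h-index of only 1.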
Like Publish or Perish, Tenurometer also draws on the Google Scholar index (though it does so as a browser extension, not an application, meaning it is compatible with Macs). What the Tenurometer adds, Menczer says, is an additional measure, called the “universal h-index.” As research becomes increasingly cross-disciplinary, it is not uncommon for a department to be considering candidates for jobs, promotions, and tenure appointments who hail from different disciplinary backgrounds or whose influence is evident in other disciplines.
“We have computer scientists, and physicists, and we have social scientists, and people from many different backgrounds, who publish in lots of different areas,” Menczer says of the informatics program at Indiana. “…We have very different citation patterns, many different communities, with different publishing traditions, and it is very difficult to compare.” In other words, it would be wrongheaded to compare the h-indexes of a sociologist and a computer scientist.
The universal h-index, however, controls for differences in the publishing traditions of each field, as well as the amount of research each scholar has had to compete with in order to make an “impact,” Menczer says. Still, “We don’t intend for it to be used by academic departments as a single element of decisions,” he adds, pointing to the project’s stated caveats against giving the h-index top credence in tenure decisions. “That would be a pretty bad thing.”
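The article does not spell out the formula behind the universal h-index, but one published normalization of this kind (due to Radicchi, Fortunato, and Castellano, 2008) divides each paper's citation count by the average citations per paper in its field before computing the h-index. The sketch below assumes that approach for illustration; the field averages and citation counts are invented:

```python
def h_index(citations):
    """Largest h such that h papers have >= h (possibly rescaled) citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def universal_h_index(papers, field_avg):
    """papers: list of (field, citation_count) pairs.
    field_avg: assumed average citations per paper in each field.
    Rescale each paper's citations by its field's average, then take h."""
    rescaled = [cites / field_avg[field] for field, cites in papers]
    return h_index(rescaled)

# Two scholars with identical raw citation counts, but in fields with
# different citation norms (the averages here are made up).
field_avg = {"sociology": 5.0, "computer science": 20.0}
sociologist = [("sociology", c) for c in [10, 8, 5, 2, 1]]
cs_scholar = [("computer science", c) for c in [10, 8, 5, 2, 1]]
print(universal_h_index(sociologist, field_avg))  # → 1
print(universal_h_index(cs_scholar, field_avg))   # → 0
```

The point of the example is the asymmetry: the same raw counts translate into different normalized scores depending on how citation-heavy the scholar's field is, which is exactly the kind of cross-field distortion the universal measure is meant to correct.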
Really, Menczer says, the tool's power to compare the accomplishments of researchers in different disciplines is less exciting than its ability to help academics map how disciplines are bleeding into one another. As professors — curious, perhaps, about their own h-indexes and those of their colleagues — and administrators query the Google Scholar database through the Tenurometer interface, the tool records their activities, producing metadata that could allow observers to “begin to see the dynamics of the emergent collaborations across fields,” he says.