Regular readers may know that I can get cranky at times about our passion for demonstrating value and productivity in quantitative ways that encourage publishing for the sake of publishing, leaving little time for reflection or for pursuing ideas that won’t quickly provide a line for the CV. So I was a little skeptical when I sat down to read a new book, Meaningful Metrics: A 21st-Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact, published by ACRL.

Turns out it’s a thoughtful and thorough guide to new and old ways to see what kind of audience a publication gets and how not to abuse those measures. While it’s addressed to librarians, it would be valuable to any scholar interested in these issues. The subjects covered – impact, bibliometrics, altmetrics, along with some special topics – each get a deep-dive explanation as well as “in practice” chapters that explain how things work. My personal favorite is a chapter on “disciplinary impact,” which does a fascinating job of exploring how scholars in different disciplines approach the idea of research impact, underscoring the idea that this is a social and cultural practice, not a one-size-fits-all formula.

The authors, Robin Chin Roemer of the University of Washington and Rachel Borchardt of American University, kindly agreed to answer some of my questions. They do a much better job than I could of explaining why we might want to know more about metrics – so long as they are meaningful.  

What motivated you to write this book? Why should librarians care?

Robin: We wrote the book for a couple of reasons - the most pressing of which was that we really wanted there to be a practical, concise guide to research metrics that would specifically bring the LIS community into the present altmetrics conversation. At the time we began the project, we really felt that there wasn’t a lot out there that spoke about these issues in a way that would make sense to LIS students and professionals who weren’t already somehow involved in either bibliometrics or scholarly communication in general. Added to this was the fact that we’d already received a lot of interest in our altmetrics work from professionals who recognized the potential relevance of metrics to their jobs, but weren’t sure how to go about furthering their skills and knowledge. It made us feel like we had a responsibility to do something for them, to give them confidence and help them take the proverbial “next step.” Writing a book seemed a good way to do it without sacrificing the caveats and logistics that inevitably pop up in the course of exploring a new area of research.

Rachel: As for why librarians should care - it should be said that many librarians already do care about scholarly metrics. Metrics are a topic that has already generated significant interest in many librarians’ institutions, from questions about their use and integration to requests to generate metrics reports for individuals, departments and other campus groups. However, as we discuss in the book, there are a lot of reasons why more librarians should care about scholarly metrics, and specifically about altmetrics. For instance, librarians often serve as important neutral voices within institutional discussions regarding sensitive topics like the definition of research impact. What’s more, librarians are widely recognized as change agents within the broader academic community, particularly in relation to issues of scholarly communication. These characteristics, combined with our unique understanding of the life cycle of information, compel us to become involved in today’s altmetrics conversations, advocating for better and more transparent metrics tools and lending our voice to the committees and groups that are currently discussing the future of metrics.

Journal Impact Factor remains a commonly used metric. What's wrong with it? What do newer approaches have to offer?

Robin: It’s not so much that there’s something intrinsically “wrong” with Impact Factor as it is that there’s something wrong with how Impact Factor has come to be used by many parts of academia - e.g. as an uncritical substitute for individual research quality, or even worse, researcher quality. When you really look at it, Impact Factor is just another way of saying “materials published three years ago by this journal have since averaged about this many citations.” It’s a journal-level metric, for comparing the reach and influence of journals based on a definition and window of impact that is itself only a good fit for certain research areas. For this reason, its relevance can’t be generalized across different fields, let alone different disciplines. Yet that’s exactly how it’s commonly wielded, which is both misleading and frustrating to many who are just trying to do excellent work and make a legitimate impact.
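To make the arithmetic concrete: the standard two-year Impact Factor for a journal in a given year Y is usually calculated as follows (this is the commonly cited Thomson Reuters definition, not a formula taken from the book):

\[
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

So, hypothetically, a journal whose 2013 and 2014 articles attracted 400 citations during 2015 across 200 citable items would report a 2015 Impact Factor of 2.0 – a journal-level average of exactly the kind Robin describes.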

Rachel: Newer metrics, by contrast, offer different options for comparison, including many at the journal level, albeit with their own strengths and weaknesses. Source Normalized Impact per Paper (SNIP), for example, is one of the only “newer” citation-based metrics designed to allow for cross-disciplinary comparisons, taking into account the different citation cultures that are inherent within academic fields. Other examples of useful bibliometrics developed after the invention of Journal Impact Factor include cited half-life, Eigenfactor, and the SCImago Journal Rank (SJR). All of these metrics, however, are based on citation counts - the accurate collection and calculation of which is already controversial, particularly for those who work in fields outside of STEM. Altmetrics and usage metrics present the possibility of adding measurements to the equation that go beyond citations. Some of these metrics correlate highly with citation counts, while others don’t, but our job is to figure out what these new measurements say about the value of research. Additionally, altmetrics open the door to new definitions of meaningful impact that extend beyond scholarly impact and that also need to be taken into consideration. The National Science Foundation, for example, now asks applicants to describe the potential broader impacts of their research as part of the grant proposal process.

Do you worry at all that, as metrics are built into publishing platforms, publishers will constantly be watching for tweetability and engagement and be tempted to prioritize the most newsworthy, attention-grabbing research (the kind that ends up on Retraction Watch)? Do we run the risk that we're also promoting a culture that values proof of productivity over reflection?

Robin: In my mind, this is already a familiar problem across both academia and more general information fields like journalism. Publications that sound particularly explosive or topical are likely to get attention for reasons that do not necessarily have anything to do with research quality. That has the potential to boost not only that publication’s altmetrics but also its bibliometrics in the short term. The big difference to me is that altmetrics does not profess to be a measure of research quality by any means. It is more accurately a measure of an output’s attention - a measure that teaches us something about the potential influence, distribution, and life cycle of research that citations are not well equipped to capture. Don’t we want to know if no one downloads a scholarly article that’s deeply insightful? I would, as would many librarians, researchers, and socially-minded funding agencies. I’ll also point out that citation-based metrics are unreliable for measuring an article’s inherent research quality. It’s well known that researchers will sometimes cite sources uncritically, or in order to refute a previously published piece of research. These practices are precisely why so many academic disciplines make clear that all research metrics should always be understood in the context of qualitative factors.

Rachel: Just to give a concrete example of why bibliometrics are a poor substitute for qualitative reflection, consider the case of review articles. These articles consistently remain one of the most highly-cited areas of scholarly publication, to the extent that review journals often top Thomson Reuters’ Impact Factor rankings for individual disciplines. Does this pattern of citation generation mean that review articles are more impactful than the majority of other research? More likely, it means that many scholars find them to be useful as a way of introducing a large topic quickly in their research. Our own 2012 review-type article, From Bibliometrics to Altmetrics, has been cited nearly 50 times in roughly 39 months for exactly this reason. So yes, altmetrics does run the risk of promoting values that shouldn’t necessarily be the primary focus of research evaluation. However, until the broader practice of using metrics to substitute for deeper assessments of research value and quality is addressed,  we’ll continue to see research favored for reasons other than intrinsic value.

Kathleen Fitzpatrick's recent critique of Academia.edu applies to many commercial social platforms for scholars: "everything that’s wrong with Facebook is wrong with Academia.edu . . . perhaps we should think twice before committing our professional lives to it." What happens if the platforms we rely on for metrics are sold or disappear? What about privacy concerns when the business model of these platforms is collecting micropayments of personal information that can be bundled and repurposed? Are there more durable, less invasive means for demonstrating the value of scholarship?

Rachel: Privacy concerns in association with social media platforms and, more broadly, with online sites that collect personal information are long-standing, with roots that trace back much further than altmetrics. So applying this broader issue only to the systems that are now used to collect metrics is arguably a bit narrow. What’s more, it’s important to remember that, like Facebook, the use of research sites like Academia.edu is still at this point a matter of personal choice for academics. In other words, each of us can decide what the opportunity to reach a broader audience with our research is worth in terms of exposing certain personal information.

That said, there are some altmetrics tools, such as ImpactStory, that are trying to find a financially durable model that is transparent to end users and does not involve the selling or monetization of personal data. However, their model has struggled to achieve the kind of penetration enjoyed by tools like Academia.edu - which may suggest that academics are simply willing at this stage of the game to make the trade-off between control of their private information and convenient access to a robust and subscription-free set of tools. This is where librarians can continue to play a central role, however - in educating potential users of such tools about privacy issues in the short and long term, and in communicating with the toolmakers about the need for greater transparency. This is also something that the altmetrics community at large is tackling at present as part of the NISO Altmetrics Initiative. (Full disclosure: we both serve on committees as part of NISO’s initiative.)

Robin: I agree that the underlying concerns raised regarding Academia.edu are nothing new - although that doesn’t mean I disagree with Kathleen Fitzpatrick that more academics should be made aware of them, and of non-commercial networking tool alternatives. On the contrary! Still, it seems to me a bit premature to suggest that academics are “committing our professional lives” to a given platform simply because some of us may use it to highlight our publications or ask questions of our peers. What academics should do is be mindful about their level of investment in any tool or social platform marketed to scholars. After all, online tools change, merge, and disappear all the time, whether they’re run by for-profit or non-profit organizations. It was only three years ago that Mendeley announced its controversial acquisition by Elsevier. Librarians deal with these sorts of shake-ups all the time as part of working with academic publishers. It’s one of the reasons our profession urges users of all kinds to be mindful of their data, make back-ups, and become vocal about core issues of scholarly communication. If you want to talk about durable means of demonstrating the value of scholarship, it’s hard not to go right to the question of Open Access, and how to get scholars to commit to the financial models implied by that larger shift. Because in the end, someone has to pay for these new tools, repositories, and tracking systems; they don’t develop from nothing, or for free. Transparency across metric-generating networks is perhaps our next frontier, and I do believe that commercial entities are necessary players in the adoption (and indeed, practical development) of an eventual set of standards.

Anything else you'd like to add?

Robin: Altmetrics is very powerful and has high potential for academia - but it is obviously not a “catch-all” for all of academia’s problems, concerns, or needs. Rather, different altmetrics are good for different things, and learning how to evaluate their strengths and weaknesses is a key skill in wielding them wisely. Are altmetrics right for me to use, as an individual researcher/administrator/professional? It’s a complicated question, but it’s a great question, too. And I especially think it’s something librarians are going to have to pay serious attention to given the direction of scholarly communication. If we’re going to support 21st-century research and researchers, it just makes sense.

Rachel: Different academic disciplines have highly different scholarly communication cultures, and different ways of approaching this broader question of what it means to be impactful. Until recently, we had relatively few tools to measure this concept of impact. While altmetrics, like bibliometrics, come with a set of limitations, they give us new ways to reconsider and measure impact that weren’t possible before.

I care about this topic because I think that librarianship, like many non-STEM disciplines, is a field that can benefit from the integration of altmetrics into the larger discussion of impact. Librarianship is a relatively low-citing field, and much of our meaningful work takes place at conferences, in online forums, and in other venues that escape bibliometric measurement. Ultimately, I think it is up to each discipline to decide the role that bibliometrics and altmetrics should play in evaluating its scholarly output, rather than letting evaluators and evaluative bodies make this decision on its behalf.

Many thanks to Robin and Rachel for indulging my questions!
