In a passage surely written with tongue in cheek, Friedrich Nietzsche states that a scholar of his era would consult 200 volumes in the course of a working day. That’s far too many, he suggests: a symptom of erudition’s decay into feeble bookwormery and derivative non-thinking. “During the time that I am deeply absorbed in my work,” he says, “no books are found within my reach; it would never occur to me to allow anyone to speak or even to think in my presence.” A noble modus vivendi, though somehow not quite an admirable one.
Imagine what the philosopher would make of the 21st century, when you can carry the equivalent of the library of Alexandria on a flash drive on your keychain. Nietzsche presents the figure of 200 books a day as “a modest assessment” – almost as if someone ought to do an empirical study and nail the figure down. But we’re way past that now, as one learns from the most recent issue of Against the Grain.
ATG is a magazine written by and for research librarians and the publishers and vendors that market to them. In the new issue, 10 articles appear in a section called “Perspectives on Usage Statistics Across the Information Industry.” The table of contents also lists a poem called “Fireworks” as part of the symposium, though that is probably a mistake. (The poem is, in fact, about fireworks, unless I am really missing something.)
Some of the articles are popularizations – relatively speaking – of discussions that have been taking place in venues with titles like the Journal of Interlibrary Loan, Document Delivery & Electronic Reserves and Collection Management. Chances are the non-librarians among you have never read these publications, or even seen them at a great distance, no matter how interdisciplinary you seek to be. For that matter, discussing the ATG articles at any length in this column would risk losing too many readers. They are peer communications. But the developments they address are worth knowing about, because they will undoubtedly affect everyone’s research, sooner or later, often in ways that will escape most scholars’ notice.
Most of us are aware that the prominence and influence of scholarly publications can be quantified, more or less. The Social Science Citation Index, first appearing in 1956, is an almost self-explanatory case.
As an annual list of the journal articles where a given paper or book has been cited, SSCI provides a bibliographical service. Counting the citations then yields bibliometric data, of a pretty straightforward kind. The metric involved is simplicity itself. The number of references to a scholarly text in the subsequent literature, over a given period of time, is a rough and ready indicator of that text’s influence and prominence during said period. The reputation of an author can be similarly quantified, hashmark style.
A blunt bibliometric instrument, to be sure. The journal impact factor is a more focused device: it measures how often a journal’s articles from the previous two years were cited in a given year, divided by the number of articles the journal published over those same two years. The index was first calculated in the 1970s by what is now Thomson Reuters, also the publisher of SSCI. But the term “journal impact factor” is generic. It applies as well to the statistical assessment of economics journals on the IDEAS website, maintained by the Research Division of the Federal Reserve Bank of St. Louis. And there's the European Reference Index for the Humanities, sponsored by the European Science Foundation, which emerged in response to dissatisfaction with “existing bibliographic/bibliometric indices” for being “all USA-based with a stress on the experimental and exact sciences and their methodologies and with a marked bias towards English-language publication.”
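The arithmetic behind the standard two-year impact factor is simple enough to sketch in a few lines. The figures below are invented for a hypothetical journal, purely for illustration:

```python
# A minimal sketch of the standard two-year journal impact factor.
# All numbers are hypothetical, for illustration only.

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    """Citations received this year to items the journal published in the
    previous two years, divided by the number of items it published in
    those two years."""
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical journal: 150 articles published in 2010-11,
# cited 420 times during 2012.
print(impact_factor(420, 150))  # 2.8
```

The denominator is what makes the metric journal-specific: a journal that publishes few but heavily cited articles can outscore a much larger one.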
As the example of ERIH may suggest, bibliometric indices are not just a statistical matter. What gets counted, and how, is debatable. So is the effect of journal impact factors on the fields of research to which they apply – not to mention the people working in those fields. And publication in high-impact journals can be a career-deciding thing. A biologist and a classicist on a tenure committee will have no way of gauging how good the candidate’s work on astrophysics is, as such. But if the publications are mostly in high-impact journals, that’s something to go by.
The metrics discussed in the latest Against the Grain are newer and finer-grained than the sort of thing just described. They have been created to help research libraries track what in their collections is being used, and how often and intensively. And that, in turn, is helpful in deciding what to acquire, given the budget. (Or what not to acquire, often enough, given what’s left of the budget.)
One contributor, Elizabeth R. Lorbeer, associate director for content management for the medical library at the University of Alabama at Birmingham, says that the old way to gauge which journals were being used was to look at the wear and tear on the bound print volumes. Later, comparing journal impact factors became one way to choose which subscriptions to keep and which to cancel. But it was the wrong tool in some cases. Lorbeer writes that she considered it “an inadequate metric to use in the decision-making process because sub-discipline and newer niche areas of research were often published in journals with a lower impact factor.”
From the bibliometric literature she learned of another statistical tool: the immediacy index, which measures not how often a journal is cited, but how quickly. In some cases, a journal with a low impact factor might have a higher immediacy index, as would be appropriate for work in cutting-edge fields.
She also mentions consulting the “half-life” index for journals – a metric as peculiar, on first encounter, as the old “count the footnote citations” method was obvious. It measures “the number of publication years from the current year which account for 50 percent of current citations received” of articles from a given journal. This was useful for determining which journals had a long-enough shelf life to make archiving them worthwhile.
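The half-life calculation is less mysterious than it first sounds: count back from the current year, one publication year at a time, until half of this year's citations are accounted for. A rough sketch, with citation counts invented for illustration:

```python
# A rough sketch of the cited half-life metric described above.
# Citation counts are hypothetical.

def cited_half_life(citations_by_age):
    """citations_by_age[i] holds this year's citations to articles
    published i+1 years ago. Returns the number of publication years,
    counting back from the current year, that account for 50 percent
    of all current citations."""
    total = sum(citations_by_age)
    running = 0
    for years_back, count in enumerate(citations_by_age, start=1):
        running += count
        if running >= total / 2:
            return years_back
    return len(citations_by_age)

# Hypothetical journal whose citations cluster in its recent articles:
# half of the 100 citations fall within the two most recent years.
print(cited_half_life([40, 30, 15, 10, 5]))  # 2
```

A short half-life suggests a fast-moving field whose older issues quickly lose citation value; a long one argues for keeping the back files on the (real or virtual) shelf.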
Google Scholar provides a number of metrics – the h-index, the h-core, and the h-median – which I shall mention without professing to understand their usefulness. Lorbeer refers to a development also covered by Inside Higher Ed earlier this year: a metric based on Twitter references, to determine the real-time impact of scholarly work.
One day a Nietzsche specialist is going to be praised for writing a high-twimpact paper, whereupon the universe will end.
Other contributions to the ATG symposium paint a picture of today’s research library as a mechanism incessantly gathering information as well as making it available to its patrons – indeed, doing both at the same time. Monitoring the flow of bound volumes in and out of the library makes it relatively easy to gauge demand according to subject heading. And with digital archives, it’s possible to track which ones are proving especially useful to students and faculty.
A survey of 272 practicing librarians in ATG’s subscriber base, conducted in June of this year, shows that 80 percent “are analyzing [usage of] at least a portion of their online journal holdings,” with nearly half of them doing so for 75 to 100 percent of those holdings. It’s interesting to see that the same figure – 80 percent – indicated that “faculty recommendations and/or input” was used in making decisions about journal acquisitions. With book-length publications entering library holdings in digital form, the same tools and trends are bound to influence monograph acquisition. Some of the articles in the symposium indicate that it’s already happening.
Carbon-based life forms are still making the actual decisions about how to build the collections. But it’s not hard to imagine someone creating an algorithm that would render the whole process cybernetic. Utopia or nightmare? I don't know, but we're probably halfway there.
Everyone talks about the amount of money spent on college football, superstar coaches, television contracts and stadiums. They worry about an imbalance between the expense of university sports programs and the challenge of funding the academic enterprise. These real concerns provoke often-impassioned responses from those who defend or attack the current state of intercollegiate athletics in America.
Unfortunately, much of the noise tends to focus on extreme examples: the spectacularly paid coaches, of whom there may be only a dozen or so among the hundreds of college sports personnel, and the super-sized stadiums and sports department budgets, when most sports programs operate on a more modest scale. The targets are attractive because the celebrities of big-time football and basketball fill pages of newspapers and specialty magazines, appear endlessly on multiple television channels, and enjoy the attention of rabid fans.
Yet college sports is a complicated enterprise that serves many interests at institutions public and private, large and small. Sports are a pervasive part of American culture, and like other high-profile activities (such as finance, real estate or banking), there are bad actors, people of questionable integrity, and errors of commission and omission that attract justifiable outrage and response.
Those of us who live in the academic world, however, sometimes have trouble sorting out the real impact of college sports on our lives. We can understand this competitive world better if we separate the institution of intercollegiate athletics into its various parts, including the engagement of students, the lives of student-athletes (both celebrity performers and regular participants), the involvement of alumni and public, and the financial consequences of sustaining these programs.
Of these, the financial elements are most accessible thanks to data collected by the NCAA and required by various federal reporting rules. Money in universities is always important, especially in these difficult economic times, and we looked for a way to index the university’s cost of intercollegiate athletics to the institution's budget.
Sports expenses are funded from earned revenue (tickets, television, sales, gifts and similar revenue generated by the athletic activity itself), and from institutional revenue available for any purpose (student fees and university funds). The institutional revenue is a subsidy for an enterprise that in the best of all possible worlds should earn its own way in much the same fashion as other university nonacademic enterprises such as food services, bookstores, parking, and housing.
All but a few universities, however, subsidize athletics from student fees and general university revenue. We should ask how significant that subsidy is within the general framework of the university's academic activities. With some sense of the relationship between subsidy and academics, we can assess when sports consume too much of our academic resources.
We could compare the sports subsidy to the cost of a college of business perhaps, or to the cost of an honors program. Each university's organization is substantially different, however, making these units hard to compare.
Libraries, especially for research universities, are stable, standard enterprises central to the work of the university in a continuing way. In addition, the Association of Research Libraries (ARL) has maintained standard data on library expenses, revenue, and budgets (as well as other statistics of significance) for many decades. We anticipated that a comparison of the athletics subsidy to the expenditures on the research university's library could provide a useful reference for understanding the wide variation in the financial impact of college sports on academic institutions.
Aiding in this illustration are the data compiled by USA Today on college sports finances, although its data involve only Division I public institutions whose information is available under freedom of information rules. Private universities prefer we not see their numbers.
If we take the 64 Division I public research university members of the Association of Research Libraries (all major research universities of varying size and complexity) and compare their athletic subsidies to the cost of their libraries as reflected in the ARL data, we can get a useful distribution of the impact of sports subsidies on academic enterprises. These research universities maintain libraries to support their instructional and research programs, compete for the best students and faculty, compete as well for the external funding that makes research at this level possible, and require strong libraries for their success.
The size of the libraries reflects an institutional commitment to the academic enterprise, while the subsidy reflects a commitment to the nonacademic competitiveness of athletics. The subsidy also represents money the institution could have allocated to academic enterprises but instead uses to pay part of the cost of the intercollegiate athletic program.
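The index itself is a simple quotient. A back-of-the-envelope sketch, with dollar figures invented purely for illustration:

```python
# A back-of-the-envelope sketch of the subsidy-to-library ratio.
# Dollar amounts are hypothetical, for illustration only.

def subsidy_to_library_ratio(sports_subsidy, library_expenditures):
    """Ratio of the institutional athletics subsidy to total library
    spending. A value of 1.0 means the subsidy equals the entire
    library budget; 0.0 means athletics pays its own way."""
    return sports_subsidy / library_expenditures

# Hypothetical campus: an $18M athletics subsidy against
# a $24M library budget.
print(subsidy_to_library_ratio(18_000_000, 24_000_000))  # 0.75
```

The appeal of the ratio is that both numbers are drawn from standardized reporting (USA Today's athletics finance data and the ARL statistics), so institutions of very different sizes can be compared on a common scale.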
The table below clearly illustrates that the impact of college sports on the academic enterprise varies widely from those institutions whose sports programs require no subsidy (and therefore have no detrimental impact on the academic enterprise) to those sports programs whose subsidy reaches one and a half times the total library budget, clearly a major impact.
These varying impacts are not the result of dramatic changes over time in the library expenditures (which have followed the general trend of university budgets throughout recent years). The impact is the consequence of a college sports environment that requires growing expenses to sustain competitive or even functional programs at the Division I level. When the university must subsidize the athletic program, it indicates that sports at that institution do not compete well enough to earn sufficient revenue from attendance, television, sponsorships, alumni and donors, and must spend university money to stay within the competitive context of Division I.
The wide variation in subsidy also indicates that if the revenue of public universities continues to decline, some institutions may find that the subsidy for athletics, paid at the expense of academics, is too high a price for the other benefits sports provide. That could prompt a change in competitive division within the NCAA, or the elimination of a variety of high-cost sports.
However, those of us who have lived in various institutions know that while talk of curtailing expenditures on sports is common and enthusiastic among many faculty and some outside commentators, the constituencies for college sports – trustees, alumni, elected officials, and fans of all kinds – are passionate to an almost unbelievable degree, and they want their sports regardless of the subsidy required at the expense of the academic enterprise.
Perhaps along with the other financial requirements for participation in the NCAA Division I, we might expect such programs to limit their institutional subsidies to less than a third of their library budget. That may, however, be asking too much.
Subsidy of College Athletics (2010-11) and Library Expenditures (2008-09), Division I Public Research Universities
(Columns: Total Library Expenditures; Total Sports Subsidy; Ratio of Subsidy to Library)
University of Delaware
University of Massachusetts at Amherst
Kent State University
State University of New York at Stony Brook
University of California at Davis
University of Houston
State University of New York at Albany
State University of New York at Buffalo
Colorado State University
Southern Illinois University at Carbondale
University of California at Riverside
Washington State University
University of New Mexico
University of Cincinnati
University of Colorado at Boulder
University of Hawaii
University of Maryland at College Park
University of California at Santa Barbara
University of Connecticut
University of California at Irvine
Georgia Institute of Technology
University of Louisville
University of Illinois at Chicago
Florida State University
Arizona State University
University of Utah
University of Virginia
Oklahoma State University
University of Alabama
University of Arizona
Texas Tech University
University of North Carolina at Chapel Hill
University of California at Berkeley
University of Minnesota
University of Wisconsin at Madison
Iowa State University
University of Missouri at Columbia
University of Florida
University of Oregon
University of Kansas
University of Georgia
Michigan State University
University of South Carolina
University of Illinois at Urbana-Champaign
Indiana University at Bloomington
University of Washington
University of California at Los Angeles
University of Tennessee at Knoxville
North Carolina State University
University of Kentucky
University of Iowa
University of Michigan
Texas A&M University
Louisiana State University
Ohio State University
Pennsylvania State University
University of Nebraska at Lincoln
University of Oklahoma
University of Texas at Austin
Sports subsidy and library budget data refer to public Division I universities whose libraries are members of the Association of Research Libraries.