The dominion of open educational resources is apparently looming large, if one were to judge by a blog thread touched off by a panel discussion at a recent Knewton event. David Wiley, participating in the panel, made the bold claim that “in the near future, 80 percent of textbooks would be replaced by OER content.” Jose Ferreira responded critically to that view a few days later with a blog post, to which Wiley offered a rejoinder. Michael Feldstein then weighed in with a dissenting perspective of his own.
It’s a spirited and fruitful discussion, well worth a read. Their comments, though, didn’t tackle what I’ve come to see as the core issue for the OER movement, a foundational assumption that has crimped its progress. The assumption holds that because open-source educational content is like open-source software -- in that it’s free content that you can chop up, remix, and share with anyone -- its adoption and use should follow a similar path.
The short history of the two movements makes clear that this is not the case. As David Wiley points out, the first openly licensed educational materials were published more than 15 years ago, around the time that Linux led the movement of open-source software (OSS) into the mainstream. So why did one open-source movement take off as the other tarried on the margins, championed only by the most stalwart advocates?
While Linux has long been part of standard practice, and our daily computing lives would be unthinkable without open-source software, more than 90 percent of faculty textbook adoptions in the U.S. are still locked-down, expensive commercial materials. Most don’t doubt the unsustainability of the present course (including most publishers), but it’s also plain to see that the OER movement has not yet offered a truly satisfying alternative. The failure of OER to become mainstream at this point is only underscored by the myriad forces working in its favor: economic pressures, greater administrative accountability, government oversight and budget cuts, and a truly broken publisher model.
A clear reason for the different trajectories is the commercial support that OSS has enjoyed, and that OER has not. Contrary to the common view that OSS has advanced largely through loosely organized communities of volunteers, it’s actually often strongly supported through private enterprise. More than 80 percent of the contributions to Linux, for example, come today from companies like Google and Samsung. But the success of OSS isn’t simply through commercial appropriation. Instead, companies were able to support OSS because they were building on an already-present foundation of voluntarism in the hacker community. While a volunteer community of course exists in OER, it does not have the depth and breadth of its OSS counterpart. The voluntarism of the hacker community does not, in other words, map well onto the community of academic instructors. This situation isn’t an accident of history but reflects a fundamental difference in the roles and self-understanding of each group.
With OSS, the hacker is often an end user but more centrally the creator and modifier of code. And to the extent that hackers form a community, it is a community of problem-solvers addressing issues that concern their work directly. In his seminal book on hacker open-source culture, The Cathedral and the Bazaar, Eric Raymond suggests that “Every good work of software starts by scratching a developer’s personal itch.” Contrast this with the relationship faculty have to the educational content they use: for most, it’s a tool for teaching a class, a means of supporting an activity that is largely extrinsic to the tasks of creating and modifying pedagogical content. Most instructors are not editors, let alone creators of their classroom content; they are simply end users.
If there’s a personal itch to scratch at all, it’s usually in the area of original scholarship and research, not teaching materials (let’s recall that the Internet was born to share research, not lesson plans). For most instructors, the textbook is a convenient package, without which the task of managing a class would be that much more laborious. Commercial publishers have long recognized what the OER movement has not: that often-overworked and underpaid instructors are looking to content and course technology to make their lives easier, not to take on the additional responsibility of managing their own content without financial recognition for that labor. Unlike open-source hackers, most instructors feel no thrill in belonging to a community of content problem-solvers. To truncate an otherwise large topic, instructors are not hackers, and that changes everything. Or it should have for the OER movement.
The recent gains of, and the growing prospects for, OER are, in fact, a tacit acknowledgement of this difference. No doubt the single biggest success to date for the movement is the OpenStax project, but this success breaks any illusion that the practice of OER is analogous to that of open software. Connexions, the OpenStax predecessor project at Rice, languished for years as an open-source content platform until Rice hired Joel Thierstein as associate provost to turn the project around. What did he do? Thierstein, who previously worked in the private sector developing content for the telecommunications industry, had a simple and very powerful idea: raise grant money to hire the same companies that ghostwrite textbooks for the traditional publishers, and then release the texts into the public domain under the most open license available.
As commercial textbook equivalents, their use required no behavioral changes for faculty. They would not be “learning objects” or fragments that required additional faculty work. Faculty could use them as teaching tools, just as they would conventional content, except, in this case, they’re free. Like the commercial publishers, Thierstein rightly understood that faculty want an easy and straightforward way to adopt high-quality and appropriate content. Thierstein’s success enabled Rice to go forward with additional fund-raising and Connexions’ rebranding as OpenStax. A simple idea has had a significant impact.
And yet for all the success of OpenStax, it’s also clear that a free version of a commercial text will never alone be sufficient for OER to reach the mainstream, nor should it be. Some learning technologies, either already in use or emerging, have the capacity to improve student success significantly. The OER movement’s almost singular focus on cost can obscure the larger objective -- actually getting more students through to graduation while ensuring that they’ve learned (and enjoyed learning) something along the way.
The risk for the OER movement is that it unwittingly reinforces the kind of resource disparities we see everywhere else in our society: a situation in which the well-off enjoy content with the latest technologies and practices, and the not-so-well-off manage without them. To be sure, OpenStax partnerships with third-party technology partners are a recognition of this need, but these relations are still established within the traditional publisher/tech partner binary model, with the difference that the core content is low-cost or free. As important as that project is, it doesn’t yet realize the promise of OER as disaggregated high-quality content created and modified from anywhere.
A better way forward is to compensate the stakeholders -- faculty, copyright holders, and technologists, principally -- for their contributions to the OER ecosystem. This can be done by charging students nominally for the OER courses they take or as a modest institutional materials fee. When there are no longer meaningful costs associated with the underlying content, it becomes possible to compensate faculty for the extra work while radically reducing costs to students. While I launched a new venture to do this, what’s needed are lots of entities -- for-profit and nonprofit -- to experiment with funding models. It’s all achievable and there will likely be no single way to accomplish it.
From this will emerge a new breed of courseware, one that preserves the low cost and flexibility of open content while embracing learning technologies that support faculty and student success. Certainly such a model involves costs, though not so much for the content as for the tools that improve its use and for the people on the ground who are actually doing the work of curating and adapting materials. Align the incentives in the right way, and this model of openness can empower faculty members and institutions in unprecedented ways. It will encourage local innovation so that, over time, the courseware, now unlocked and financially supported, becomes an expression of the teaching itself.
Openness, then, lends itself to a new order of distributed content development that includes outstanding learning technologies; I think all the bloggers mentioned above recognize this. But precisely because instructors are not hackers and belong to an entirely different community of practice, a system for distributed content development also needs to be accompanied by a system of distributed financial incentives. When this all comes together -- and it will -- then courseware will escape commodification and become a creative and low-cost force in education. Only then should we begin to count the percentages.
A technological visionary created a little stir in the late ‘00s by declaring that the era of the paper-and-ink book as dominant cultural form was winding down rapidly as the ebook took its place. As I recall, the switch-off was supposed to be complete by the year 2015 -- though not by a particular date, making it impossible to mark your day planner accordingly.
Cultural dominance is hard to measure. And while we do have sales figures, even they leave room for interpretation. In the June issue of Information Research, the peer-reviewed journal’s founder T.D. Wilson takes a look at variations in the numbers across national borders and language differences in a paper called “The E-Book Phenomenon: A Disruptive Technology.” Wilson is a senior professor at the Swedish School of Library and Information Science, University of Borås, and his paper is in part a report on research on the impact of e-publishing in Sweden.
He notes that the Book Industry Study Group, a publishing-industry research and policy organization, reported last year that ebook sales in the United States grew by 45 percent between 2011 and 2012 – although the total of 457 million ebooks that readers purchased in 2012 still lagged 100 million copies behind the number of hardbacks sold the same year. And while sales in Britain also surged by 89 percent over the same period, the rate of growth for non-Anglophone ebooks has been far more modest.
Often it’s simply a matter of the size of the potential audience. “Sweden is a country of only 9.5 million people,” Wilson writes, “so the local market is small compared with, say, the UK with 60 million, or the United States with 314 million.” And someone who knows Swedish is far more likely to be able to read English than vice versa. The consequences are particularly noticeable in the market for scholarly publications. Swedish research libraries “already spend more on e-resources than on print materials,” Wilson writes, “and university librarians expect the proportion to grow. The greater proportion of e-books in university libraries are in the English language, especially in science, technology and medicine, since this is the language of international scholarship in these fields.”
Whether or not status as a world language is a necessary condition for robust ebook sales, it is clearly not a sufficient one. Some 200 million people around the world use French as a primary or secondary language. But the pace of Francophone ebook publishing has been, pardon the expression, snail-like -- growing just 3 percent per year, with “66 percent of French people saying that they had never read an ebook and did not intend to do so,” according to a study Wilson cites. And Japanese readers, too, seem to have retained their loyalty to the printed word: “there are more bookshops in Japan (almost 15,000 in 2012) than there are in the entire U.S.A. (just over 12,000 in 2012).”
Meanwhile, a report issued not long after Wilson’s paper appeared shows that the steady forward march of the ebook in the U.S. has lately taken a turn sideways. The remarkable acceleration in sales between 2008 and 2012 hit a wall in 2013. Ebooks brought in as much that year ($3 billion) as the year before. A number of factors were involved, no doubt, from economic conditions to an inexhaustible demand for Fifty Shades of Grey sequels. But it’s also worth noting that even with their sales plateauing, ebooks did a little better than trade publishing as a whole, where revenues contracted by about $300 million.
And perhaps more importantly, Wilson points to a number of developments suggesting that the ebook format is on the way to becoming its own, full-fledged disruptive technology. Not in the way that, say, the mobile phone is disruptive (such that you cannot count on reading in the stacks of a library without hearing an undergraduate’s full-throated exchange of pleasantries with someone only ever addressed as “dude”) but rather in the sense identified by Clayton Christensen, a professor of business administration at the Harvard Business School.
Disruption, in Christensen’s usage, refers, as his website explains it, to “a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors.” An example he gives in an article for Foreign Affairs is, not surprisingly, the personal computer, which was initially sold to hobbyists -- something far less powerful as a device, and far less profitable as a commodity, than “real” computers of the day.
The company producing a high-end, state-of-the-art technology becomes a victim of its own success at meeting the demands of clientele who can appreciate (and afford) its product. By contrast, the “disruptive” innovation is much less effective and appealing to such users. It leaves so much room for improvement that its quality can only get better over time, as those manufacturing and using it explore and refine its potentials – without the help of better-established companies, but also without their blinkers. By the time its potential is being realized, the disruptive technology has developed its own infrastructure for manufacture and maintenance, with a distinct customer base.
How closely the ebook may resemble the disruptive-technology model is something Wilson doesn’t assess in his paper. And in some ways, I think, it’s a bad fit. The author himself points out that when the first commercial e-readers went on the market in 1998, it was with the backing of major publishing companies (empires, really) such as Random House and Barnes & Noble. And it’s not even as if the ebook and codex formats were destined to reach different, much less mutually exclusive, audiences. The number of ebook readers who have abandoned print entirely is quite small – in the US, about five percent.
But Wilson does identify a number of developments that could prove disruptive, in Christensen’s sense. Self-published authors can and do reach large readerships through online retailers. The software needed to convert a manuscript into various ebook formats has become more readily available, and people dedicated to developing the skills could well bring out better-designed ebooks than well-established publishers do now. (Alas! for the bar is not high.)
Likewise, I wonder if the commercial barriers to ebook publishing in what Wilson calls “small-language countries” might not be surmounted in a single bound if the right author wrote the right book at a decisive moment. Unlike that Silicon Valley visionary who prophesied the irreversible decline of the printed book, I don’t see it as a matter of technology determining what counts as a major cultural medium. That’s up to writers, ultimately, and to readers as well.
So you almost have that book contract in your grasp. You’ve had your most trusted colleagues drop a favorable hint about your work in the ear of the acquisitions editor at the best press in your field. You carefully (and, of course, unobtrusively) stalked said editor at the spring meeting of your disciplinary society, and managed to “accidentally” meet at the drinks reception.
You wrote a follow-up e-mail — not too soon, not too late — with a general query describing your idea and how it fits into the broader publication program at Desirable University Press. And when you received back that warm response — O, happy day! — you observed a decent interval before sending off your polished proposal, on which, of course, you’ve been working ceaselessly for the last six months.
And now you’re refreshing your inbox every five minutes or so, waiting for that hoped-for green light.
Did you ever think — after all your work — that what you were producing was a luxury?
Probably not. All you really want is for the best publisher, whatever that means to you, to publish it; and for your ideas to receive notice in the reviews that matter in your field. Well, you’d probably like your promotion and tenure committee to be impressed, too. Royalties would be nice, but more than anything, you want impact.
Yet maybe you think it should be a luxury, after all the effort and sweat and heartache you’ve invested in it. As far as you’re concerned, it’s pure gold, and should be priced accordingly. You can be sure it will be. According to one book provider for university libraries, the average cover price of an academic book now stands at around $90.00 — several times the average price of a trade book.
It’s not just the price that makes scholarly books a luxury. Think about this line from a recent study of luxury goods: “In luxury, quality is assumed, price does not have to be explained rationally; it is the price of the intangibles (history, legend, prestige of the brand)."
That sounds a lot like the system of scholarly publishing we have come to know and love (and/or loathe). It’s exactly the history, legend, prestige of the brand — the welter of such elements as the name of a given press, the backlist of titles in its catalog, the reputation of the institution with which it is (to a greater or lesser degree) affiliated, the grand old stories we tell about the way a certain editor championed a book against a sea of troubles — that gives the whole enterprise a whiff of mystique and nobility. Scholarly publishing, like any other luxury good, is a reputation-driven business producing goods for a select few at high prices, which in turn transmit a signal about the value of the good — and the prestige of the producer.
But as any social psychologist can tell you, reputations are a poor shortcut to reality. Worse, they can be a fruitful source of bias — filled with meaning we make instead of content we assess.
If you think about it, it’s surprising that scholarly publishing is — and is apparently expected to be — a business in which brand reputation is not just operative, but essential. Stories abound of promotion and tenure committees advising candidates of the four or five publishers with which a book they present must be placed — at least if they have hopes of further advancement. But of course to say this is to mistake the brand for the content. After all, scholarly merit is supposed to be a function of, well, merit, not mere reputation. Isn’t it? Aren’t we supposed to read the books, and not merely the spines?
• • •
The old chestnut that academic publishing is in a state of crisis may or may not be true; that all depends on your definition of “crisis.” What is certainly true is that the nature of scholarly publishing has changed, in some ways so much that it would scarcely be recognizable to the founding generation of university press directors.
After all, it is only meaningful to distinguish “scholarly publishing” from all other sorts of publishing if it has not just a distinctive content but a distinctive purpose.
The content is indisputably meant to be scholarly work of great merit. Even within a single field disagreements may (and do) arise about exactly what merit is, but no one seriously disputes that the content provided by academic presses is, or ought to be, characterized by a kind of defensible and substantive merit.
That is to say, scholarly publishing — at least in the days American university presses were established — was seen as a way for scholars to communicate their ideas with each other in ways that would not depend, at least not critically, on the market. Exactly because the market would be a poor judge of scholarly merit, producing scholarly work was seen as an extension of institutional mission. Colleges and universities exist not merely to create, but to communicate knowledge; and the social privileges conferred because of that mission (notably, qualification to receive charitable gifts incentivized by the tax code) entail social responsibilities to support both the process and the production of research.
So here’s a thesis. If there truly is a crisis in scholarly publishing, it has arisen from this fundamental first cause: the end of the era in which institutions sponsoring presses saw the publishing of scholarship as something near to the heart of their core mission, and deserving to be supported on those terms. Result: What was never intended to be a system left to the vicissitudes of the market has become exactly that. Scholarly books have become high-priced, prestige-driven luxury goods not by accident, but by forgetfulness.
Symptoms of this shift abound. Presses unable to break even are closed, or severely curtailed, as universities refocus on “strategic priorities.” Book prices rise at a rate far higher than inflation in order to cover publishers’ fixed costs as institutional subventions vanish. Authors are chosen not so much on the basis of prize-winning, promising early work but rather because they can command the services of a literary agent.
It doesn’t have to be this way. To solve the crisis we should speak frankly of its causes, and imagine alternatives to received structures. There are three points to keep in view as we invent and test alternatives.
• Open access doesn’t mean poor quality. The push for open access, an idea received with acute suspicion in some quarters, has come about in no small way as a direct consequence of the predictable failure of a market-based system for scholarly publishing to serve its audience.
As a species we are pretty hardwired to associate cost with value — one reason why luxury goods, for which no rational explanation can suffice, yet exist. That is the hardest challenge for open-access advocates (of which I am one) to overcome: how can something free be trusted? But there is no logical connection between the price (as distinguished from the production cost) of a scholarly work and its merit. Yes, assuring quality is a costly business. But there are other ways of paying those costs than depending on purchase-price revenue.
• Communicating ideas is (or should be) critical to the mission of all institutions. The relationship between publishing and the institutional mission needs to be reassessed. Real and lasting change in the broken system of scholarly communication cannot be accomplished by publishers, or libraries, alone. Ultimately it will take a critical mass of institutional leaders able to see how abandoning academic presses to the market was, in effect, abdicating a core scholarly responsibility. I am fortunate to work in an institution led by such people, with the result that the costs of doing the expected work of assuring quality and publishing scholarship will be borne by institutional commitments instead of by consumers.
• Disruptive innovation is messy. Changing the revenue model — shifting the source of the revenue from either end of the value chain (purchases by consumers at one end, or “author fees” at the other) to institutional commitments at the center — is made possible by new technologies for distribution (digital publishing). But it will also mean the emergence of a new set of ideas about the kinds of institutions that do scholarly publishing.
For one thing, there may well be a larger number of publishers producing a smaller number of works on a focused set of topics. Most of the proposed solutions to the “crisis,” both those offered by publishers and those sponsored by foundations, have been essentially focused on preserving the current demographic profile of university presses. It is not self-evident that this is the only solution. Liberal arts colleges (to cite my own example) have a valuable and distinct contribution to make to the identification of what constitutes “scholarship” — but, with a few admirable exceptions, have been frozen out of the conversation by the sheer volume of production required by a market-dependent system. That can now change.
So, too, digital tools make possible not only different ways of producing work, but different ways of organizing the work of publishers. University presses, by and large, are organized as hierarchical firms — and with good reason; such organizations manage market pressures efficiently. But academic publishing could become much more like a commons, adapting to its own purposes Yochai Benkler’s ideas of commons-based peer production in which the uniting thread is a shared passion for the development and distribution of new ideas among colleagues and peers. Said in different terms, what if the future of academic publishing looked less like the Encyclopædia Britannica, and more like Wikipedia?
Good luck on the book contract. When you get it — and, of course, you will — remember why you got into your field in the first place. It probably wasn’t to produce luxuries, but to create ideas and communicate them to your peers — the same reason I wrote this piece. So when you have an idea for your next book, think about working with a publisher who shares those goals.
Mark Edington is director of the Amherst College Press.
“This might be too geeky for a column,” said the subject line of a reader's email, “but just in case …”
It sounded like a challenge, and I took the bait. The topic in question? A new statistical instrument to quantify the degree of open access for scholarly journals. In other words, exactly geeky enough.
The metric can, in principle, be used with journals in any field. At this stage, though, it’s only really being talked about in library and information science (LIS) circles. It represents a challenge to academic librarians to “walk the talk” in regard to their own professional publications. But it's an “inside baseball” discussion that merits attention outside the dugout, given the role of academic librarians in shaping the whole terrain of 21st-century scholarly communication.
That role is crucial but often overlooked. Academic librarians still have the core responsibilities of managing acquisitions and maintaining subscriptions, of course, but must also keep track of the new array (constantly growing, across all disciplines) of digital-format archives, databases, and other repositories. Plus they have the pedagogical task of instructing patrons in how to use new research tools as they become available.
As if that weren’t enough to do, research libraries have been mutating into scholarly publishers in their own right, sometimes in cooperation with their universities’ presses. To borrow a phrase from a recent paper in the journal College & Research Libraries, academic librarians have gone beyond being “gatekeepers of knowledge” -- in charge of its storage and retrieval -- to playing an active role in its promulgation.
The paper’s authors, Sugimoto et al., sent a survey to their colleagues at 91 academic libraries in the U.S. about how they kept track of developments within library and information science itself. Just over 600 people filled out all or most of the questionnaire.
The findings reveal a profession that's seriously interested in its own rapidly changing role in scholarly communications: "A vast majority (94.2 percent) consult professional literature" -- defined to include scholarly journals as well as less formal venues such as trade publications and blogs — "on, at the very least, a monthly basis.” More than a quarter of respondents said they did so daily.
Over 80 percent of respondents indicated they followed peer-reviewed LIS journals. More than three-quarters kept up with conference papers and proceedings in their field. "Nearly three-quarters," the paper notes, "reported sharing the results of research or reports of best practices" with their colleagues, with more than half (54.2 percent) doing so in peer-reviewed journals.
The other, more granular statistics in the paper are significant, but I want to stress a couple of important big-picture issues suggested by the study. On the one hand, the Indiana researchers describe a kind of virtuous circle. Academic librarians are eager both to produce and to exchange knowledge about their field -- not just to publish but to read one another’s work and to incorporate it into their own activity. (And that is a good thing for the rest of us, prone though we are to taking their efforts for granted.)
The paper also stresses that academic librarians have been advocates for "new (particularly open) systems of scholarly communication." They have shown prescient and growing support for open-access publishing for a number of years now. But here's where things become problematic, because it sounds like the library and information studies people could use some "new (particularly open) systems of scholarly communication” of their own.
Librarians who are also tenure-track faculty need to publish in the field's major peer-reviewed journals. (Forty percent of respondents to the Indiana researchers’ survey were either tenured or on the track.)
But with a prestigious journal, the lag time between acceptance and publication can run to a year or more. That delay "impede[s] the timeliness and back-and-forth exchanges that are required for effective scholarly communication." And in "technology-related fields ... research may lose its currency if it is not delivered expediently."
Then there is the conundrum assessed in another recent study, Micah Vandegrift and Chealsye Bowley's “Librarian Heal Thyself: A Scholarly Communication Analysis of LIS Journals,” published last month by In the Library With a Lead Pipe, which is probably the best name ever for a peer-reviewed journal. (Vandegrift is a scholarly communications librarian at Florida State University. Bowley is library supervisor at FSU's Florence Study Center in Italy.)
While academic librarians have been strong advocates of open-access publishing, many LIS researchers seem to exempt their own field. One study the authors cite found that half of respondents “cared mostly about publication without considering the policies of the journals in which they published and that only 16 percent had exercised the right to self-archive in the institutional archive.”
Vandegrift and Bowley assembled data on the policies of 111 library and information science journals and found that with a large minority of them (well over a third) the author signs over all copyrights to the publisher — “including but not limited,” as the contracts run, "to the right to publish, republish, transmit, sell, distribute, and otherwise use the [article] in whole or in part … in derivative works throughout the world, in all languages, and in all media of expression now known or later developed.” (You could probably get away with giving the PDF to a close friend, just be very, very quiet about it.)
Just a handful of journals “had direct or implied policies regarding what the author is allowed to do with specific versions of the same work,” including self-archiving in an institutional repository. “A significant percentage of our professional literature,” Vandegrift and Bowley conclude, "is still owned and controlled by commercial publishers whose role in scholarly communication is to maintain ’the scholarly record,’ yes, but also to generate profits at the expense of library budgets by selling our intellectual property back to us.”
A norm doesn’t remain a norm unless nearly everyone involved acquiesces to it. A couple of years ago The Economist referred to the signs of growing unhappiness with the state of scholarly publishing as "The Academic Spring," and Vandegrift and Bowley's paper is part of it.
“A great example of a proactive and outspoken group,” Bowley told me in an email exchange, “was the Journal of Library Administration's Editor-in-Chief Damon Jaggars and entire editorial board, who resigned in March 2013 … [over] an author agreement that they thought was ‘too restrictive and out of step with the expectation of authors.’” Vandegrift was among the authors who had requested a Creative Commons license or to retain their copyright — an open-access policy that Taylor & Francis, the journal’s publisher, rejected.
Continuing the effort to bring the publishing practices of LIS researchers into accord with the field’s ethos, Vandegrift and Bowley have created an instrument called the Journal Openness Index. It uses a points system for the various degrees of control over copyright and reuse indicated in a journal’s stated policies. The higher the JOI, the more open-access the publication. Crunching the numbers for several leading LIS titles, the authors find that the journals of professional societies get the highest scores while those from commercial-academic publishers get the lowest, with journals issued by university presses falling somewhere in between.
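The mechanics of such a points system are easy to sketch in code. In the toy example below, the policy categories and weights are purely illustrative stand-ins (this column doesn't reproduce Vandegrift and Bowley's actual rubric); the idea is simply that each openness-friendly policy a journal states earns points, and the sum yields a score that can be compared across titles:

```python
# Illustrative sketch of a points-based journal "openness index."
# These policy categories and weights are hypothetical -- they are NOT
# Vandegrift and Bowley's actual JOI rubric, just a stand-in for the idea.

ILLUSTRATIVE_WEIGHTS = {
    "author_retains_copyright": 3,
    "creative_commons_license": 3,
    "self_archiving_allowed": 2,
    "no_embargo": 1,
    "free_to_read": 1,
}

def openness_score(policies):
    """Sum the weights of every openness-friendly policy a journal states."""
    return sum(weight for policy, weight in ILLUSTRATIVE_WEIGHTS.items()
               if policies.get(policy))

# A society journal with permissive policies outscores a commercial
# journal that requires a full copyright transfer.
society_journal = {
    "author_retains_copyright": True,
    "creative_commons_license": True,
    "self_archiving_allowed": True,
    "no_embargo": True,
    "free_to_read": True,
}
commercial_journal = {"self_archiving_allowed": True}

print(openness_score(society_journal))     # 10
print(openness_score(commercial_journal))  # 2
```

Any real instrument would also have to handle ambiguous or unstated policies, which is where most of the labor of a project like JOI actually lies.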
That is not exactly counterintuitive. But Vandegrift and Bowley offer JOI as a step in the direction of establishing open access as one of the criteria for how colleagues assess the value of scholarship in their own field.
“I imagine all the students that come out of library schools,” Vandegrift said by email, “who go into public librarianship and all of a sudden are cut off from access to the literature that can and should inform the practice of their work, which they were trained to do in library schools where ‘access’ is touted as a value. I think we can do better, and I think it will take articles like this one to push librarians to be more proactive and to ask our faculty colleagues to join us.”
As for applying JOI to journals in other fields, the idea is feasible but demanding. “Such a project would need proper backing,” Bowley told me, “whether in the form of a team or institutional and financial support, in order to ensure its long-term upkeep. It could also be partially done through crowdsourcing the information, though. If a professional organization or institution is interested in taking up the project, they would certainly be welcomed to do so.”
I hope their colleagues take them up on it. An informed librarian is a helpful librarian — and it’s a fool who underestimates the value of that.