The Book Industry Study Group just reported that 52 percent of college students surveyed agreed with the statement: “I would rather pay $100 for a learning solution that improves my result by one letter grade and reduces my study time by 25 percent than $50 for my current textbook.” As a professor, I am troubled by declines in the effort many in my classes are willing to put into doing the reading I assign. But as an administrator, I also recognize students’ concerns with scoring high grades, juggling internships and part-time jobs, and minimizing expenses.
Multiple factors are at play here: grade inflation, social pressures, student debt, the iffy job market. Further relevant is the time students report studying each week (now an average of 15 hours, down from about 24 in the 1960s). Yet one of the major culprits is the price tag on textbooks and other course materials, estimated at around $1,200 a year -- assuming you buy them.
Faculty members and students alike are in a quandary over how to handle textbook costs, especially for those hefty tomes often used in introductory courses. Increasingly, students are opting not to purchase these books -- or even to rent them. Digital formats (and rentals of any kind) tend to be less expensive than buying print, though frequently the decision is not to acquire the materials at all. The U.S. Public Interest Research Group reports that two-thirds of students have refrained from purchasing at least one assigned textbook because of price.
Recently, American University ran focus groups with our undergraduates, looking to get a sense of how they make textbook decisions. For courses in their major, they are willing to lay out more money than for general education classes, which they perceive (often wrongly) not to require much work anyway. Overall, the common sentiment is that spending more than about $50 for a book is excessive. And of course there are plenty of college textbooks with prices that exceed $50.
This message was reinforced by an anecdote shared with me by Michael Rosenwald, a reporter for The Washington Post. While interviewing American University students for a story on college reading and book-purchasing habits, Rosenwald asked, “Who buys course materials from the campus store these days?” Their answer: “Freshmen,” revealing that once students settle into campus life, they discover less expensive ways to get their books -- or simply decide how much of the reading they'll actually do.
For faculty members, the challenge is to find a workable balance between the amount of reading we would like those in our classes to complete and realistic expectations for student follow-through. While some full-length books may remain on our required list, their numbers have shrunk over time. These days, assignments that used to call for complete books are being slimmed down to single chapters or articles. Our aspirations for our students to encounter and absorb substantial amounts of written material increasingly rub up against their notions of how much is worth reading.
The numbers tell the tale. That same Book Industry Study Group report noted that between 2010 and 2013, the percentage of students indicating that classes they were taking required “no formal course materials” rose from 4 percent to 11 percent.
Student complaints are equally revealing. When Robert Putnam’s Bowling Alone came out, I assigned the book to a group of honors undergraduates, eager for them to experience careful, hypothesis-driven, data-rich social science research. One member of the class balked. In fact, she publicly berated me, demanding to know why I hadn’t told the group about the “short version” of the book -- meaning an article Putnam had written years earlier, before his full study was completed. She went on to inform the class what she had learned from a teacher in high school: books aren’t worth reading, only articles. The rest of what’s in books is just padding.
The author and teacher in me cringed at how this young woman perceived the intellectual enterprise.
For students, besides the understandable limitations on time and finances, there is the question of value proposition. If the objective is learning that lasts, maybe buying the book (and reading it) is worth it. But if the goal is getting a better grade, maybe not. All too often today, it is the grade that triumphs.
One player that faculty members generally leave out of the equation is the publishing industry, including not just the companies whose names are on the spines but the people who print the books, supply the paper and ink, and operate the presses. Recently I spoke at the Book Manufacturers’ Institute Conference and was troubled by the disconnect I perceived between those who produce and distribute textbooks and those who consume them. As students buy fewer books, publishers do smaller print runs, resulting in higher prices, which in turn reinforces the spiral of lower sales.
A potential compensatory financial strategy for publishers is issuing revised editions, intended to render obsolete those already in circulation. In reality, students often take a pass on these new offerings, waiting until they appear on the used book market. Yes, sometimes there is fresh, timely material in the new versions, but how often do we really need to update textbooks on the structure of English grammar or the history of early America?
When speaking with participants in the book manufacturers’ conference, I became increasingly convinced that the current model of book creation, distribution and use is not sustainable. What to do?
There is a pressing need for meaningful collaboration between faculty members and the publishing industry to find ways of producing materials designed to foster learning that reaches beyond the test -- and that students can be reasonably expected to procure and use. I would like to hope that textbook publishers (who I know are financially suffering) are in conversation not just with authors seeking book contracts but with faculty members who can share their own assignment practices, along with personal experiences about how students are voting with their feet regarding purchasing and reading decisions.
To help foster such dialogue, here are some suggestions:
Gather data on shifts in the amount and nature of reading that faculty assign, say, over the past 10-20 years.
Reconsider publishing strategies regarding those handsome, expensive, color-picture-laden texts, whose purpose is apparently to entice students to read them. If students aren’t willing to shell out the money, the book likely isn’t being read. Focus instead on producing meaningful material written with clear, engaging prose.
Rethink when a new edition is really warranted and when not. In many instances, issuing a smaller update, to be used as a supplement to the existing text, is all that’s needed. (Think of those encyclopedia annuals with which many of us are familiar.) Far more students will be willing to pay $9.95 for an update to an older book than $109.95 for a new one. McDonald’s learned long ago that you can turn a handsome profit through high volume on low-cost items. The publishing industry needs to do the math.
Make faculty members aware of the realities of both textbook prices (some professors never look before placing book orders) and student reading patterns. I heartily recommend hanging out in the student union (or equivalent) and eavesdropping. You will be amazed at how cunning -- and how honest -- students are about their study practices.
Encourage professors to assign readings (especially ones students are asked to pay for) that maximize long-term educational value.
Educate students about the difference between gaming the assignment system (either for grades or cost savings) and learning.
The results can yield a win-win situation for both the publishing industry and higher education.
Naomi S. Baron is executive director of the Center for Teaching, Research, and Learning at American University and author of Words Onscreen: The Fate of Reading in a Digital World.
In Friday’s decision in Cambridge University Press v. Patton, the U.S. Court of Appeals for the Eleventh Circuit followed decades of jurisprudence in casting aside bright-line rules for determining whether faculty made fair use of copyrighted material. This is regrettable, as the celebrated 2012 district court opinion in the same case had opened up the possibility of teaching faculty how to properly make fair use of material using plain terms and easy-to-understand concepts, while the appeals court opinion returns us to the days of case-by-case holistic analysis and detailed exceptions, loopholes, and caveats.
The case revolves around a challenge by several companies that published non-textbook scholarly works to Georgia State University’s electronic reserve systems, wherein faculty and librarians would scan in excerpts of books for students to access digitally, a technological improvement over the traditional practice of leaving a copy or two on reserve at the library circulation desk. The publishers claimed mass copyright infringement while Georgia State cited the fair use provisions of Section 107 of the Copyright Law.
The district court exhaustively analyzed each work uploaded to electronic reserves, finding only five in violation out of the dozens submitted by the publishing companies, by taking a new twist to the law’s four factors for analysis:
1. The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. The nature of the copyrighted work;
3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. The effect of the use upon the potential market for, or value of, the copyrighted work.
Traditional fair use analysis calls for a case-by-case evaluation of each potential use, independently weighing the four factors holistically -- a difficult task that often requires knowledge of unavailable facts (such as the effect on the market for the work, which is nearly impossible for anyone outside the company to gauge). The Supreme Court in Campbell v. Acuff-Rose Music, Inc., for instance, specifically discarded any use of “bright line rules” for determining fair use of copyrighted material.
Judge Orinda Evans went a different route. She found that de minimis use (such as when a faculty member posts a work but no student ever accesses it) is not a violation, and that in most cases, using one chapter or 10 percent of a book that is under copyright protection would meet the fair use test. The judge decided to clearly assign winners in each of the four factors, and then give the overall win to the party with the majority of factors in their favor.
She wrote that factors one and two almost always went in favor of nonprofit higher educational use of academic works. While factor four may be difficult for a faculty member to assess, and would likely go in favor of the publishers, the judge ruled that 10 percent or one chapter of a work that is digitally available would meet the fair use test for factor three. Adding factors one, two and three together let her find a majority and, thus, fair use, even without factor four.
Note that these findings were for those works that could be purchased digitally. In another section, the judge applied some behavioral economics to factor four by finding that for those works that a publisher did not make available digitally, a faculty member could use approximately 18 percent of the work and still win a fair use analysis. That larger limit on factor three could encourage publishers to make their works available at reasonable prices, so as to discourage fair use without remuneration.
This was a groundbreaking opinion that allowed intellectual property lawyers in higher education to clearly explain to administrators and faculty members which uses would and would not be fair. Rather than require our botany and geography professors to also become copyright scholars, we could provide them with reasonable tests to ensure they properly balanced the interests of students in accessing the content with the interest of publishers in compensation for developing the content. While this wasn’t the first effort to develop fair use standards, it was the clearest, and the first time that such standards were set by a court.
The appeals court rejected this analysis and found that the “District Court did not err in performing a work-by-work analysis of individual instances of alleged infringement in order to determine the need for injunctive relief. However, the District Court did err by giving each of the four fair use factors equal weight, and by treating the four factors mechanistically.”
The appeals court instead called for a return to the holistic analysis. Rejecting the 10 percent or one chapter bright-line rule, the appellate court wrote that “the District Court should have performed this analysis on a work-by-work basis, taking into account whether the amount taken -- qualitatively and quantitatively -- was reasonable in light of the pedagogical purpose of the use and the threat of market substitution.”
The appeals court decision stands on solid precedential ground, and it is not the first to call for a holistic, case-by-case analysis. While one can defend that decision by looking to the past, the decision is a poor one for those who look to the future. As content becomes more available in varying formats, and our faculty, staff and students are faced with myriad opportunities to pay for content, make fair use, or violate the copyrights of authors and creators, the presence of clear standards and easily digestible rules gave higher education a fighting chance to educate our academic community and encourage proper balancing and fair (but not inappropriate) use of content.
William Patry and Melville Nimmer, the two seminal thinkers in copyright law, each devote hundreds of pages to explaining copyright law. Their sets of volumes, which cost thousands of dollars, provide a comprehensive analysis of fair use and all of its details. But these volumes and their detailed analysis are well outside the scope of what we expect of faculty members who do not specialize in intellectual property, and even if instructors took the time to learn all the permutations of the fair use analysis, they simply do not have the time to conduct an exhaustive analysis of each use. This isn’t to say that they can’t, but to state the reality that they won’t.
Frankly, the dueling decisions in these cases, and the numerous articles and statements by serious copyright scholars on both sides of this analysis, show that even those who steep themselves in the details of fair use can disagree on whether a certain use is fair or violative.
When intellectual property law experts cannot agree, we should not expect our history and math faculty to do justice to the fair use analysis each time.
Instead, faculty will divide into two camps. One group will “throw caution to the wind” and use whatever content they wish in whatever form they desire, hoping never to raise the ire of the publishing companies.
The other, out of an abundance of caution, will self-censor, and fail to make fair use of content for fear that they might step over a line they cannot possibly identify, and can never be certain of until a judge rules one way or the other. Either way, our students and the publishers lose out.
The district court opinion shed some light into the murky swamp of fair use analysis. The Eleventh Circuit opinion dims that light, and threatens to return us to a regime wherein faculty who are not experts in copyright law will either use without consideration of the law or self-censor, diminishing the utility of the concept of fair use.
The Constitution teaches that the purpose of copyright is to “promote the Progress of Science and useful Arts.” The district court opinion found that small excerpts available to students “would further the spread of knowledge.”
Arming faculty with clear rules and standards to properly balance fair use of content would go a long way toward achieving this goal.
Joseph Storch is an attorney in the State University of New York Office of General Counsel. The views expressed here are his own.
The dominion of open educational resources is apparently looming large, if one were to judge by a blog thread touched off by a panel discussion at a recent Knewton event. David Wiley, participating in the panel, made the bold claim that “in the near future, 80 percent of textbooks would be replaced by OER content.” Jose Ferreira responded critically to that view a few days later with a blog post; Wiley offered a rebuttal, and Michael Feldstein then weighed in with a dissenting perspective of his own.
It’s a spirited and fruitful discussion, well worth a read. Their comments, though, didn’t tackle what I’ve come to see as the core issue for the OER movement, a foundational assumption that has crimped its progress. The assumption holds that because open-source educational content is like open-source software -- in that it’s free content that you can chop up, remix, and share with anyone -- its application and uses should follow in a similar way.
The short history of the two movements makes clear that this is not the case. As David Wiley points out, the first openly licensed educational materials were published more than 15 years ago, around the time that Linux led the movement of open-source software (OSS) into the mainstream. So why did one open-source movement take off as the other tarried on the margins, championed only by the most stalwart advocates?
While Linux has long been part of standard practice, and our daily computing lives would be unthinkable without open-source software, more than 90 percent of faculty textbook adoptions in the U.S. are still locked-down, expensive commercial materials. Most don’t doubt the unsustainability of the present course (including most publishers), but it’s also plain to see that the OER movement has not yet offered a truly satisfying alternative. The failure of OER to become mainstream at this point is only underscored by the myriad forces working in its favor: economic pressures, greater administrative accountability, government oversight and budget cuts, and a truly broken publisher model.
A clear reason for the different trajectories is the commercial support that OSS has enjoyed, and that OER has not. Contrary to the common view that OSS has advanced largely through loosely organized communities of volunteers, it’s actually often strongly supported through private enterprise. More than 80 percent of the contributions to Linux, for example, now come from companies like Google and Samsung. But the success of OSS isn’t simply a matter of commercial appropriation. Instead, companies were able to support OSS because they were building on an already-present foundation of voluntarism in the hacker community. While a volunteer community of course exists in OER, it does not have the depth and breadth of its OSS counterpart. The voluntarism of the hacker community does not, in other words, map well onto the community of academic instructors. This situation isn’t an accident of history but reflects a fundamental difference in the roles and self-understanding of each group.
With OSS, the hacker is often an end user but more centrally the creator and modifier of code. And to the extent that hackers form a community, it is a community of problem-solvers addressing issues that concern their work directly. In his seminal book on hacker open-source culture, The Cathedral and the Bazaar, Eric Raymond suggests that “Every good work of software starts by scratching a developer’s personal itch.” Contrast this with the relationship faculty have to the educational content they use: for most, it’s a tool for teaching a class, a means of supporting an activity that is largely extrinsic to the tasks of creating and modifying pedagogical content. Most instructors are not editors, let alone creators of their classroom content; they are simply end users.
If there’s a personal itch to scratch at all, it’s usually in the area of original scholarship and research, not teaching materials (let’s recall that the Internet was born to share research, not lesson plans). For most instructors, the textbook is a convenient package, without which the task of managing a class would be that much more laborious. Commercial publishers have long recognized what the OER movement has not: that often-overworked and underpaid instructors are looking to content and course technology to make their lives easier, not to take on the additional responsibility of managing their own content without financial recognition for that labor. Unlike the open-source hacker, the thrill of belonging to a community of content problem-solvers simply isn’t their thing. To truncate an otherwise large topic: instructors are not hackers, and that changes everything. Or it should have for the OER movement.
The recent gains of, and the growing prospects for, OER are, in fact, a tacit acknowledgement of this difference. No doubt the single biggest success to date for the movement is the OpenStax project, but this success breaks any illusion that the practice of OER is analogous to that of open software. Connexions, the OpenStax predecessor project at Rice, languished for years as an open-source content platform until Rice hired Joel Thierstein as associate provost to turn the project around. What did he do? Thierstein, who previously worked in the private sector developing content for the telecommunications industry, had a simple and very powerful idea: raise grant money to hire the same companies that ghostwrite textbooks for the traditional publishers, and then release the texts into the public domain under the most open license available.
Because the texts were commercial textbook equivalents, using them required no behavioral changes from faculty. They would not be “learning objects” or fragments that required additional faculty work. Faculty could use them as teaching tools, just as they would conventional content -- except, in this case, they’re free. Like the commercial publishers, Thierstein rightly understood that faculty want an easy and straightforward way to adopt high-quality, appropriate content. Thierstein’s success enabled Rice to go forward with additional fund-raising and Connexions’ rebranding as OpenStax. A simple idea has had a significant impact.
And yet for all the success of OpenStax, it’s also clear that a free version of a commercial text will never alone be sufficient for OER to reach the mainstream, nor should it be. Some learning technologies, either already in use or emerging, have the capacity to improve student success significantly. The OER movement’s almost singular focus on cost can obscure the larger objective -- actually getting more students through to graduation while ensuring that they’ve learned (and enjoyed learning) something along the way.
The risk for the OER movement is that it unwittingly reinforces the kind of resource disparities we see everywhere else in our society: a situation in which the well-off enjoy content with the latest technologies and practices, and the not-so-well-off manage without them. To be sure, OpenStax partnerships with third-party technology partners are a recognition of this need, but these relations are still established within the traditional publisher/tech partner binary model, with the difference that the core content is low-cost or free. As important as that project is, it doesn’t yet realize the promise of OER as disaggregated high-quality content created and modified from anywhere.
A better way forward is to compensate the stakeholders -- faculty, copyright holders, and technologists, principally -- for their contributions to the OER ecosystem. This can be done by charging students nominally for the OER courses they take or as a modest institutional materials fee. When there are no longer meaningful costs associated with the underlying content, it becomes possible to compensate faculty for the extra work while radically reducing costs to students. While I launched a new venture to do this, what’s needed are lots of entities -- for-profit and nonprofit -- to experiment with funding models. It’s all achievable and there will likely be no single way to accomplish it.
From this will emerge a new breed of courseware, one that preserves the low cost and flexibility of open content while embracing learning technologies that support faculty and student success. Certainly such a model involves costs, though not so much for the content as for the tools that improve its use and for the people on the ground who are actually doing the work of curating and adapting materials. Align the incentives in the right way, and this model of openness can empower faculty members and institutions in unprecedented ways. It will encourage local innovation so that, over time, the courseware, now unlocked and financially supported, becomes an expression of the teaching itself.
Openness, then, lends itself to a new order of distributed content development that includes outstanding learning technologies; I think all the bloggers mentioned above recognize this. But precisely because instructors are not hackers and belong to an entirely different community of practice, a system for distributed content development also needs to be accompanied by a system of distributed financial incentives. When this all comes together -- and it will -- then courseware will escape commodification and become a creative and low-cost force in education. Only then should we begin to count the percentages.