Ruling on copyright fair use will hurt professors, students and publishers (essay)

In Friday’s decision in Cambridge University Press v. Patton, the U.S. Court of Appeals for the Eleventh Circuit followed decades of jurisprudence in casting aside bright-line rules for determining whether faculty made fair use of copyrighted material. This is regrettable. The celebrated 2012 district court opinion in the same case had opened up the possibility of teaching faculty how to make proper fair use of material using plain terms and easy-to-understand concepts, while the appeals court opinion returns us to the days of case-by-case holistic analysis and detailed exceptions, loopholes, and caveats.

The case revolves around a challenge by several publishers of non-textbook scholarly works to Georgia State University’s electronic reserve system, wherein faculty and librarians would scan excerpts of books for students to access digitally, a technological improvement over the traditional practice of leaving a copy or two on reserve at the library circulation desk. The publishers claimed mass copyright infringement, while Georgia State cited the fair use provisions of Section 107 of the Copyright Act.

The district court exhaustively analyzed each work uploaded to electronic reserves, finding only five in violation out of the dozens submitted by the publishing companies, by putting a new twist on the law’s four factors for analysis:

  1. The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. The nature of the copyrighted work;
  3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. The effect of the use upon the potential market for, or value of, the copyrighted work.

Traditional fair use doctrine calls for a case-by-case analysis of each potential use, weighing the four factors holistically, which is difficult and often requires knowledge of unavailable facts (such as the effect of the use on the market for the work, which is nearly impossible for anyone outside the publishing company to gauge). The Supreme Court in Campbell v. Acuff-Rose Music, Inc., for instance, specifically rejected the use of “bright-line rules” for determining fair use of copyrighted material.

Judge Orinda Evans went a different route. She found that de minimis use (such as when a faculty member posts a work but no student ever accesses it) is not a violation, and that in most cases, using one chapter or 10 percent of a book that is under copyright protection would meet the fair use test. The judge decided to clearly assign winners in each of the four factors, and then give the overall win to the party with the majority of factors in their favor.

She wrote that factors one and two almost always went in favor of nonprofit higher educational use of academic works. While factor four may be difficult for a faculty member to assess, and would likely go in favor of the publishers, the judge ruled that using 10 percent or one chapter of a work that is digitally available would meet the fair use test for factor three. Adding factors one, two and three together let her find a majority and, thus, fair use, even without factor four.

Note that these findings were for works that could be purchased digitally. In another section, the judge applied some behavioral economics to factor four by finding that for works a publisher did not make available digitally, a faculty member could use approximately 18 percent of the work and still win a fair use analysis. That larger factor-three limit could encourage publishers to make their works available at reasonable prices, so as to discourage fair use without remuneration.
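
For readers who find a procedural summary helpful, the district court’s approach described above amounts to a simple decision rule: count factors one and two for the educational user, test the amount used against the 10 percent or one chapter threshold (or roughly 18 percent when no digital license exists), and award fair use to whichever side claims a majority of the four factors. The Python sketch below is only an illustration of that description; the function and variable names are my own, and nothing here is legal advice or part of either opinion.

    # Illustrative sketch of the district court's heuristic as described in
    # this essay; a simplification for explanation only, not legal advice.

    def factor_three_ok(pages_used, total_pages, chapters_used, digital_license_available):
        """Factor three under the district court's rule: 10 percent or one
        chapter if a digital license is available, roughly 18 percent if not."""
        share = pages_used / total_pages
        if digital_license_available:
            return share <= 0.10 or chapters_used <= 1
        return share <= 0.18

    def majority_of_factors_fair_use(pages_used, total_pages, chapters_used,
                                     digital_license_available, harms_market):
        """Tally the four factors and give the overall win to the side with
        the majority, as the district court did."""
        factors_for_use = 2  # factors one and two favor nonprofit educational use
        if factor_three_ok(pages_used, total_pages, chapters_used, digital_license_available):
            factors_for_use += 1
        if not harms_market:
            factors_for_use += 1  # factor four also goes to the user
        return factors_for_use >= 3  # majority of the four factors

    # Example: one 25-page chapter of a 300-page book with a digital license
    # available passes even if factor four is assumed to favor the publisher,
    # because factors one, two and three form a majority.
    print(majority_of_factors_fair_use(25, 300, 1, True, harms_market=True))  # True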

This was a groundbreaking opinion that allowed intellectual property lawyers in higher education to clearly explain to administrators and faculty members which uses would and would not be fair. Rather than require our botany and geography professors to also become copyright scholars, we could provide them with reasonable tests to ensure they properly balanced the interests of students in accessing the content with the interest of publishers in compensation for developing the content. While this wasn’t the first effort to develop fair use standards, it was the clearest, and the first time that such standards were set by a court.

The appeals court rejected this analysis and found that the “District Court did not err in performing a work-by-work analysis of individual instances of alleged infringement in order to determine the need for injunctive relief. However, the District Court did err by giving each of the four fair use factors equal weight, and by treating the four factors mechanistically.”

The appeals court instead called for a return to the holistic analysis. Rejecting the 10 percent or one chapter bright-line rule, the appellate court wrote that “the District Court should have performed this analysis on a work-by-work basis, taking into account whether the amount taken -- qualitatively and quantitatively -- was reasonable in light of the pedagogical purpose of the use and the threat of market substitution.”

The appeals court decision stands on solid precedential ground, and the Eleventh Circuit is not the first court to call for a holistic, case-by-case analysis. While one can defend that decision by looking to the past, the decision is a poor one for those who look to the future. As content becomes more available in varying formats, and our faculty, staff and students are faced with myriad opportunities to pay for content, make fair use, or violate the copyrights of authors and creators, clear standards and easily digestible rules would give higher education a fighting chance to educate our academic community and encourage proper balancing and fair (but not inappropriate) use of content.

William Patry and Melville Nimmer, two seminal thinkers in copyright law, each devote hundreds of pages to explaining the subject. Their multivolume treatises, which cost thousands of dollars, provide a comprehensive analysis of fair use and all of its details. But these books and their detailed analysis are well outside the scope of what we expect of faculty members who do not specialize in intellectual property, and our instructors simply do not have the time to conduct an exhaustive analysis of each use, even if they took the time to learn all the permutations of the fair use analysis. This isn’t to say that they can’t, but to state the reality that they won’t.

Frankly, the dueling decisions in this case, and the numerous articles and statements by serious copyright scholars on both sides of the analysis, show that even those who steep themselves in the details of fair use can disagree on whether a particular use is fair or violative.

When intellectual property law experts cannot agree, we should not expect our history and math faculty to do justice to the fair use analysis each time. 

Instead, faculty will divide into two camps. One group will “throw caution to the wind” and use whatever content they wish in whatever form they desire, hoping never to raise the ire of the publishing companies.

The other, out of an abundance of caution, will self-censor, and fail to make fair use of content for fear that they might step over a line they cannot possibly identify, and can never be certain of until a judge rules one way or the other. Either way, our students and the publishers lose out.

The district court opinion shed some light into the murky swamp of fair use analysis. The Eleventh Circuit opinion dims that light, and threatens to return us to a regime wherein faculty who are not experts in copyright law will either use without consideration of the law or self-censor, diminishing the utility of the concept of fair use.

The Constitution teaches that the purpose of copyright is to “promote the Progress of Science and useful Arts.” The district court opinion found that small excerpts available to students “would further the spread of knowledge.”

Arming faculty with clear rules and standards to properly balance fair use of content would go a long way toward achieving this goal.

Joseph Storch is an attorney in the State University of New York Office of General Counsel. The views expressed here are his own.

Open educational resources movement needs to move beyond voluntarism (essay)

The dominion of open educational resources is apparently looming large, if one were to judge by a blog thread touched off by a panel discussion at a recent Knewton event. David Wiley, participating in the panel, made the bold claim that “in the near future, 80 percent of textbooks would be replaced by OER content.” Jose Ferreira responded critically to that view a few days later with a blog post, to which Wiley offered a dissenting reply. Michael Feldstein then weighed in with a perspective of his own.

It’s a spirited and fruitful discussion, well worth a read. Their comments, though, didn’t tackle what I’ve come to see as the core issue for the OER movement, a foundational assumption that has crimped its progress. The assumption holds that because open-source educational content is like open-source software -- in that it’s free content that you can chop up, remix, and share with anyone -- its adoption and use should follow a similar path.

The short history of the two movements makes clear that this is not the case. As David Wiley points out, the first openly licensed educational materials were published more than 15 years ago, around the time that Linux led the movement of open-source software (OSS) into the mainstream. So why did one open-source movement take off while the other tarried on the margins, championed only by its most stalwart advocates?

While Linux has long been part of standard practice, and our daily computing lives would be unthinkable without open-source software, more than 90 percent of faculty textbook adoptions in the U.S. are still locked-down, expensive commercial materials. Most don’t doubt the unsustainability of the present course (including most publishers), but it’s also plain to see that the OER movement has not yet offered a truly satisfying alternative. The failure of OER to become mainstream at this point is only underscored by the myriad forces working in its favor: economic pressures, greater administrative accountability, government oversight and budget cuts, and a truly broken publisher model.

A clear reason for the different trajectories is the commercial support that OSS has enjoyed and that OER has not. Contrary to the common view that OSS has advanced largely through loosely organized communities of volunteers, it is actually often strongly supported by private enterprise. More than 80 percent of the contributions to Linux, for example, come today from companies like Google and Samsung. But the success of OSS isn’t simply a matter of commercial appropriation. Rather, companies were able to support OSS because they were building on an already-present foundation of voluntarism in the hacker community. While a volunteer community of course exists in OER, it does not have the depth and breadth of its OSS counterpart. The voluntarism of the hacker community does not, in other words, map well onto the community of academic instructors. This situation isn’t an accident of history but reflects a fundamental difference in the roles and self-understanding of each group.

With OSS, the hacker is often an end user but more centrally the creator and modifier of code. And to the extent that hackers form a community, it is a community of problem-solvers addressing issues that concern their work directly. In his seminal book on hacker open-source culture, The Cathedral and the Bazaar, Eric Raymond suggests that “Every good work of software starts by scratching a developer’s personal itch.” Contrast this with the relationship faculty have to the educational content they use: for most, it’s a tool for teaching a class, a means of supporting an activity that is largely extrinsic to the tasks of creating and modifying pedagogical content. Most instructors are not editors, let alone creators of their classroom content; they are simply end users.

If there’s a personal itch to scratch at all, it’s usually in the area of original scholarship and research, not teaching materials (let’s recall that the Internet was born to share research, not lesson plans). For most instructors, the textbook is a convenient package, without which the task of managing a class would be that much more laborious. Commercial publishers have long recognized what the OER movement has not: that often-overworked and underpaid instructors are looking to content and course technology to make their lives easier, not to take on the additional responsibility of managing their own content without financial recognition for that labor. Unlike the open-source hacker, they are not drawn to the thrill of belonging to a community of content problem-solvers. To compress an otherwise large topic: instructors are not hackers, and that changes everything. Or it should have for the OER movement.

The recent gains of, and the growing prospects for, OER are, in fact, a tacit acknowledgement of this difference. No doubt the single biggest success to date for the movement is the OpenStax project, but this success breaks any illusion that the practice of OER is analogous to that of open software. Connexions, the OpenStax predecessor project at Rice, languished for years as an open-source content platform until Rice hired Joel Thierstein as associate provost to turn the project around. What did he do? Thierstein, who previously worked in the private sector developing content for the telecommunications industry, had a simple and very powerful idea: raise grant money to hire the same companies that ghostwrite textbooks for the traditional publishers, and then release the texts into the public domain under the most open license available.

As equivalents of commercial textbooks, these titles required no behavioral changes from faculty. They would not be “learning objects” or fragments that required additional faculty work. Faculty could use them as teaching tools, just as they would conventional content, except, in this case, they’re free. Like the commercial publishers, Thierstein rightly understood that faculty want an easy and straightforward way to adopt high-quality, appropriate content. Thierstein’s success enabled Rice to go forward with additional fund-raising and the rebranding of Connexions as OpenStax. A simple idea has had a significant impact.

And yet for all the success of OpenStax, it’s also clear that a free version of a commercial text will never alone be sufficient for OER to reach the mainstream, nor should it be. Some learning technologies, either already in use or emerging, have the capacity to improve student success significantly. The OER movement’s almost singular focus on cost can obscure the larger objective -- actually getting more students through to graduation while ensuring that they’ve learned (and enjoyed learning) something along the way.

The risk for the OER movement is that it unwittingly reinforces the kind of resource disparities we see everywhere else in our society: a situation in which the well-off enjoy content with the latest technologies and practices, and the not-so-well-off manage without them. To be sure, OpenStax partnerships with third-party technology partners are a recognition of this need, but these relations are still established within the traditional publisher/tech partner binary model, with the difference that the core content is low-cost or free. As important as that project is, it doesn’t yet realize the promise of OER as disaggregated high-quality content created and modified from anywhere.

A better way forward is to compensate the stakeholders -- faculty, copyright holders, and technologists, principally -- for their contributions to the OER ecosystem. This can be done by charging students a nominal fee for the OER courses they take or by assessing a modest institutional materials fee. When there are no longer meaningful costs associated with the underlying content, it becomes possible to compensate faculty for the extra work while radically reducing costs to students. I have launched a new venture to do this, but what’s needed are lots of entities -- for-profit and nonprofit -- experimenting with funding models. It’s all achievable, and there will likely be no single way to accomplish it.

From this will emerge a new breed of courseware, one that preserves the low cost and flexibility of open content while embracing learning technologies that support faculty and student success. Certainly such a model involves costs, though not so much for the content as for the tools that improve its use and for the people on the ground who are actually doing the work of curating and adapting materials. Align the incentives in the right way, and this model of openness can empower faculty members and institutions in unprecedented ways. It will encourage local innovation so that, over time, the courseware, now unlocked and financially supported, becomes an expression of the teaching itself.

Openness, then, lends itself to a new order of distributed content development that includes outstanding learning technologies; I think all the bloggers mentioned above recognize this. But precisely because instructors are not hackers and belong to an entirely different community of practice, a system of distributed content development also needs to be accompanied by a system of distributed financial incentives. When this all comes together -- and it will -- courseware will escape commodification and become a creative and low-cost force in education. Only then should we begin to count the percentages.

Brian Jacobs is founder & CEO of panOpen.com.

Textbook prices still crippling students, report says

Report, citing crippling textbook costs, argues for more use of open-source materials. Publishers dispute the findings.

Textbook prices more transparent but still high

Publishers, bookstores and professors are complying with federal provisions to make textbook pricing more transparent for students. But prices are still high.

Company to help institutions embrace open educational resources

Open education advocates launch Lumen Learning, which aims to help institutions replace expensive textbooks with open-source solutions.

Colleges try to beat textbook costs with book reserves

To lessen the impact of rising textbook costs, three institutions have created programs that allow students to borrow course materials.

OpenStax announces first iPad version of its free, online textbooks

OpenStax College, an open-access textbook publisher, introduces its first offering through iTunes -- and hopes the $4.99 charge will allow students to benefit from extras and the business model to grow.

CourseSmart announces analytics program to measure student engagement

The e-textbook consortium CourseSmart has announced an analytics program that will provide data about students' usage of a text, which universities can use to link with outcomes.

Pearson unveils OER search engine

Publishing giant unveils search engine for open educational resources -- and its own content.

Survey: iPad adoption sluggish but e-textbooks booming

Apple iPads are gradually catching up to their hype on four-year campuses; e-textbooks make inroads as well.

