In 2002, a year or so after I assumed the role of Director of IT Policy at Cornell, I reviewed the DMCA notices the university received. As the university's DMCA officer, I had also worked with university counsel on our liability. At the time, so long as we were a conduit and not serving the data ourselves, we had essentially none, except in the case of "repeat offenders," a term undefined in both statutory and case law and poorly articulated in the statute itself. It stated that "repeat offenders" would be terminated … perhaps a Freudian slip of the tongue by the content owners who purchased the statute through lobbying, but more properly understood to mean "the account" or "the subscription" of repeat offenders …
That year Cornell established its procedures for managing DMCA notices: one process for when a "Cornell computer" served content, another for when the network acted merely as a conduit. The former secured safe harbor; the latter served to educate students about the nexus of technology and law that placed them in harm's way. Later, in 2008, after the passage of the Higher Education Opportunity Act, the Department of Education, charged with issuing regulations under the law, established four means by which a campus I.S.P. (one that wanted to avoid threats to federal funding, including financial aid for its students) could mitigate liability. Doing "something" with DMCA notices became the least invasive technical choice, and it was modeled on the process we had created at Cornell.
So it is with interest that I read Jeffrey Toobin's article "The Solace of Oblivion" in this week's New Yorker. Because it focuses on privacy and the European Court of Justice's decision to support the "right to be forgotten," your first thought might be "what does that decision have to do with the DMCA?" The answer lies in both procedural and substantive law. First, the removal process that Google, which has 90% of the search-engine market share in Europe, has set up is somewhat similar to the one it established for DMCA notices … a procedure not unlike the one we created at Cornell over 12 years ago. Second, in any number of cases, particularly in the United States, where the right to free speech is prized over privacy, copyright has been used as the vehicle to remove content when privacy actions fail. Witness the Diebold voting machine scandal and the DMCA notice sent to Swarthmore in 2003 (a case Diebold eventually lost when the court ruled that it had abused its copyright, but one that was effective in the moment in getting Swarthmore to take down the content). Copyright was also the vehicle in the recent Apple controversy over celebrities' photographs. Either the stars or their friends had taken the photographs, hence creating the ownership that resulted in the DMCA notices that brought the images down. History is nothing if not ironic.
Free speech, copyright, and privacy are on a collision course in American law and culture. Richard Katz made that point recently while moderating a presentation I gave for a WCET webcast, "Law and Disorder: Laws Disturbed by the Internet and What Is to Be Done About Them," borrowing from language Justice O'Connor once used in an abortion case concerning constitutional privacy law, family planning, and technologies that decreased the gestation time of a fetus in his or her mother's womb. I will be giving that same talk at the University at Buffalo this Wednesday for the Digital Challenges Series, sponsored by UB Libraries and UB Information Technology and co-sponsored by CEISARE, the Office of Educational Innovation and Assessment, and Student Life.
What do you think about this “collision” of law and society in such important areas of life and jurisprudence?