Last year I had the opportunity to attend the Museum of Modern Art’s exhibition of Andy Warhol’s work, which included a retrospective of his use of appropriation. While Warhol is well known for work such as his “Campbell’s Soup Cans” (1962) series and his silkscreen painting “Flowers” (1964), he is not alone in borrowing or recontextualizing an object or image and presenting it to an audience to view in a new way. Artists from Pablo Picasso (“Guitar, Newspaper, Glass, and Bottle,” 1913) and Marcel Duchamp (“Fountain,” 1917) to Sherrie Levine (“After Walker Evans,” 1981) and Kathleen Gilje (“Bacchus, Restored,” 1992) have all practiced appropriation.
These visual artists are not alone. Appropriation is also standard practice in other arenas, including poetry and music. But while it is embraced in some sectors, it is still almost universally frowned upon in academia. In part this revulsion stems from a deep-seated concern that the line between appropriation and plagiarism is so fuzzy that allowing the latter would put us on a dangerous and slippery slope. The concern is legitimate. After all, there is no greater sin in the academy than plagiarism.
I am not alone in suggesting that the academy is a bit behind in terms of at least engaging in this dialogue. For a few years Kenneth Goldsmith has taught “Uncreative Writing” at the University of Pennsylvania. In the class he “instructs students to appropriate, replicate and plagiarize in creating their own compositions.” Much as he did in his well-known books “Day” (2003), “The Weather” (2005) and “Traffic” (2007), in the class he requires students to practice sampling (whether from the phone book, the newspaper, the work of other writers, and so on) rather than pursue originality. He defends his unconventional approach by arguing not only that when students are forced to examine and defend their choices, their compositions prove just as “original” and “unique” as when they are engaged “in traditional forms of writing,” but also that in the modern world we are all “faced with a situation in which the managing of information has become more important than creating new and original information.”
Goldsmith has been taken to task in some quarters for his unconventional approach, but he has also been praised. Whatever your view of Goldsmith, he raises several important questions: Do students today need to be at least as skilled at what he calls “pointing” as they are at “making”? Is it possible that the best writers may not be those who are the most original but those most able to manage information? In the future, will “information management” be a necessary skill for anyone hoping to write and conduct research? Are we in the academy missing the boat when we fail to instruct our students on how to make appropriate choices and defend their selections as meaningful?
Part of Goldsmith’s broader argument is that new technology has changed the way we work; for academics, a large part of that work is researching and writing. One of the most vivid memories I have of my college years is the time I spent in the library crouched in the stacks looking for books and other source material. How many students today will have that memory? Very few, if any. Now students, like the rest of us, do most of their research online. And while the digital age has opened up a world of possibilities, it also presents its own set of challenges. In their study, “Truth Be Told: How College Students Evaluate and Use Information in the Digital Age,” Alison Head and Michael Eisenberg found that one of the major difficulties students face in conducting research today is information overload. According to their findings, the vast majority of students begin their research with search engines such as Google or Bing and quickly become overwhelmed by the amount of data they get in return. They struggle with how to navigate through the morass of information and how to distinguish the relevant from the immaterial. In short, they struggle with figuring out what to “point at” and what to reject.
Unfortunately, students won’t get much help in this regard from the academy, since few of us teach them how to “point” or ask them to defend their choices as meaningful. Yet anyone who has been in the classroom for the last several years knows this is exactly what students are already doing. They are just doing it underground, without any assistance from faculty or any requirement to defend their choices. And of course they will not admit it, for fear that doing so will put them at odds with their institution’s plagiarism policy (policies which, if you read them, are suspiciously similar; an industrious student might ask whether those appointed to draft them first used the internet to find examples from other institutions and used those to inform their own).
It may be time to rethink our outright rejection of appropriation and at least begin a broader discussion about (a) whether there are real differences between plagiarism and appropriation; (b) if so, what they are; (c) under what circumstances the latter may be used properly; and, most importantly, (d) how we can begin to address these issues pedagogically. The point is not that we should all begin teaching a variant of Goldsmith’s “uncreative writing” class, but rather that we should at least open the conversation. If there are differences, has the technology advanced to the point where we owe it to our students to figure out how best to incorporate this new understanding into our work in the academy?
Jeanne Zaino is Professor of Political Science & International Studies at Iona College, New Rochelle, NY.