• Just Visiting

    John Warner is the author of Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities and The Writer's Practice: Building Confidence in Your Nonfiction Writing.


Guest Post: Ed Tech's Collision Course

Are transparency, accuracy and honesty all fair game for ‘disruption’?

August 27, 2017

While John Warner takes a well-deserved vacation, he’s asked me to keep the lights on here. I’m Susan Schorn, the Writing Program Coordinator in the School of Undergraduate Studies at the University of Texas at Austin, and author of, among other things, Smile at Strangers: And Other Lessons in the Art of Living Fearlessly.

Since John posted this thought- (and comment-) provoking analysis of StraighterLine prior to his break, I thought I'd bring up another values conflict related to educational technology. I’ve weighed in on plagiarism detection systems before, specifically on research showing that they’re remarkably bad at the one task that justifies their existence: detecting replicated text. But as the administrator of a large cross-disciplinary writing program, I’ve also seen how plagiarism detection products like Turnitin, PlagScan, and others can actively hinder students from developing a professional understanding of academic honesty.

In their book Who Owns This Text? Plagiarism, Authorship, and Disciplinary Cultures, Carol Haviland and Joan Mullin document the different ways disciplines interpret intellectual ownership and fair use. Real-world knowledge-building communities develop shared expectations about when, why, and how to cite other voices, and they continually re-examine those expectations. For example, look at the way the Society for Industrial and Applied Mathematics (SIAM) adjudicates accusations of plagiarism. They begin by acknowledging that "there is no single accepted definition" for plagiarism, but that it "arises in a range of forms that vary widely in ease of identification." The website continues:

SIAM's assessment of whether an inadequacy of citations constitutes plagiarism will involve questions such as:

1. Does the omission of citations give a false or misleading impression that the author is the originator of the relevant results?

2. Was the author aware of the work that he/she omitted to cite?

3. Are results in the omitted citations essential to the work presented in the author's paper? Are the results in question regarded as common knowledge in the SIAM community?

This nuanced approach makes sense given that the community creating the scholarship SIAM publishes must trust and build upon one another's work. Similar questions are asked daily by writers in all professions as they ponder what to cite or not cite. The answers depend on their awareness of reader expectations, their knowledge of the history of ideas in their discipline, and their ability to discern what is and is not "essential" to their claims.

Unfortunately, while real-world determinations of plagiarism rest on the opinions of the people producing and consuming the text being judged, we’re not especially good at teaching students this social dimension of “fair use.” Instead, we tend to present students with legalistic, always/never definitions of plagiarism. Plagiarism detection software, by its design, marketing, and use, compounds this problem.

In the first place, if human experts with deep disciplinary knowledge must question and confer to decide whether plagiarism has occurred, we shouldn’t expect generic software to accurately perform the same task. In fact, because plagiarism detection software produces so many false positives and negatives, it cannot accurately teach students what “counts” as plagiarism, and instead often increases their confusion.

Most companies selling plagiarism detection services caution that their systems’ findings should be “interpreted” by instructors. In practice, I can attest that these systems are almost universally presented to students as infallible arbiters of academic honesty, and that most faculty “interpret” as little as possible. Moreover, the “interpretation” the software requires is really more a process of pushing back against the simplistic narrative created by “originality scores,” red/green flagging systems, and similar elements that create what Edward Tufte might call a “cognitive style.” The structure and presentation of plagiarism detection systems effectively reduce plagiarism to a binary, teaching students that the definition is always cut-and-dried, in all contexts. In other words, plagiarism detection systems frame academic honesty as exactly the opposite of what Haviland and Mullin observed: decisions made by a community of professionals with deep disciplinary knowledge. Our teaching methods already tend to keep this largely tacit knowledge invisible to students. The use of plagiarism detection software obscures it further.

Ultimately, plagiarism detection services, like many other educational technology products I’ve tested, are built on, and propagate, values antithetical to academic honesty. Why, when all is said and done, do we place so much value on academic honesty? Why do we struggle to teach it to students? Why do we penalize them, even eject them from our institutions, if they fail to live up to the standard? As academics, we value transparency, the testing of data, and the lineage of ideas. We don't value those things simply because they’re righteous or beautiful or ethically pure. We value them because they make our everyday work possible. Without the ability to critique and test one another's findings, we cannot move ideas forward. Without knowing the provenance of a theory, we can’t connect it to related, evolving interpretations. Without agreed-upon processes to ensure honesty and accuracy, the entire project of expanding humanity's knowledge base and handing it on to future generations becomes impossible. Plagiarism detection systems erase these critical values, while claiming to measure our students' "honesty."

How many education technology companies freely share their data and outcomes? How many instead deem it “proprietary,” and insist that we simply accept the claims of the vague bar charts in their marketing materials? How often do they tout “disruption” as a goal, versus embracing a process of sharing results and reflecting on the resulting criticism? Companies seeking a slice of higher education funding often simultaneously present themselves as our "partners," while deriding our institutions as a failed social experiment that should be cleared from their paths like the rubble of a dead civilization.

Are you with us or against us, ed tech proponents? What are your values? Please declare your loyalties. And forgive me if we don’t simply take your word for it. We’re academics. We’d like to see some evidence.


