22 Thoughts on Automated Grading of Student Writing
I have some thoughts on automated grading of student writing. Most of them are unkind.
EdX, the online learning consortium of Harvard and M.I.T., believes it is close to a workable model for the automated grading of student writing.
According to Dr. Anant Agarwal, President of EdX, “This is machine learning and there is a long way to go, but it’s good enough and the upside is huge. We found that the quality of the grading is similar to the variation you find from instructor to instructor.”
Since this news was released last week, I’ve been trying to respond in a coherent, essay-like piece of writing that ties together various thoughts and ideas into a cohesive and satisfying whole.
I’m giving up on that. This is the Internet, right? I’ve made a list.
22 Thoughts on the News that Automated Grading has Arrived
1. I’m willing to stipulate that if not today, then very soon, software will be developed that can assign a numerical grade to student writing that largely jibes with human assessment. If a computer can win Jeopardy, it can probably spit out a number for a student essay close to that of a human grader.
2. On the other hand, computers cannot read.
3. No one who teaches writing, or values writing as part of their courses, believes that the numerical grade is the important part of assessment. Ask anyone who’s taught more than a couple of semesters of composition and they’ll tell you that they “know” an essay’s grade within the first 30 seconds of reading. If that’s all I’m supposed to be doing, my job just got a lot easier.
4. Meaning, quite obviously, that what is important about assessing writing is the response to the student author that allows them to reflect on their work and improve their approach in subsequent drafts and future assignments.
5. I don’t know a single instructor of writing who enjoys grading.
6. At the same time, the only way, and I mean the only way, to develop a relationship with one’s students is to read and respond to their work. Automated grading is supposed to “free” the instructor for other tasks, except there is no more important task. Grading writing, while time-consuming and occasionally unpleasant, is simply the price of doing business.
7. The only motivations for even experimenting, let alone embracing automated grading of student writing are business-related.
8. Since we know that Watson the computer is better at Jeopardy than even its all-time greatest champions, why haven’t potential contestants been “freed” to do other things like watch three competing software algorithms answer Jeopardy questions asked by an animatronic Alex Trebek?
9. Is it possible that there are some things we should leave to people, rather than software? Do we remember that “efficiency” and “productivity” are not actually human values? If essays need to be graded, shouldn't we figure out ways for humans to do it?
10. There is maybe (emphasis on maybe) a limited argument that this kind of software could be used to grade short answer writing for exams in something like a history or literature course where key words and concepts are most important in terms of assessing a “good” answer.
11. Except that if the written assessment is such that it can be graded accurately by software, that’s probably not very good assessment. If what’s important are the facts and key concepts, won’t multiple-choice do?
12. The second most misguided statement in the New York Times article covering the EdX announcement is this from Anant Agarwal: “There is a huge value in learning with instant feedback. Students are telling us they learn much better with instant feedback.” This statement is misguided because instant feedback immediately followed by additional student attempts is actually antithetical to everything we know about the writing process. Good writing is almost always the product of reflection and revision. The feedback must be processed, and only then can it be implemented. Writing is not a video game.
13. I’m thinking about video games, and how I learn while playing them. For a couple of years, I got very into Rock Band, a music simulator. I was good, world-ranked on multiple instruments if you must ask. As one moves toward the higher-difficulty songs, frustration sets in and repeated attempts must be made to successfully “play” through one. I remember trying no fewer than 75 times in a row, one attempt after the other, to play the drum part for Rush’s “YYZ,” and each time, I was booed off the stage by my virtual fans. My frustration level reached the point where I almost hurled the entire Rock Band drum apparatus through my (closed) 2nd-story window. After that, fearing for my blood pressure and my sanity, I didn’t play Rock Band at all for a couple of weeks. When I did, at last, return to the game, I played “YYZ” through successfully on my first try. Even with video games, time to process what we’ve learned helps.
14. The most misguided statement in the Times article is from Daphne Koller, a co-founder of Coursera: “It allows students to get immediate feedback on their work, so that learning turns into a game, with students naturally gravitating toward resubmitting the work until they get it right.”
15. I’m sorry, that’s not misguided, it’s just silly.
16. Every semester, I introduce my students to the diagram for a “rhetorical situation”: an equilateral triangle with “writer,” “subject,” and “reader,” each at one of the points. With automated grading, I’ll have to change it to “writer,” “subject,” and “algorithm.”
17. What I’m saying is that writing to a simulacrum is not the same thing as writing to a flesh-and-blood human being. Software-graded writing is like having intimate relations with a RealDoll.
18. How is that not obvious?
19. That Harvard and M.I.T., two universities of high esteem, are behind EdX and the automated grading nonsense should cause shame among their faculty, at least the ones who profess in the humanities.
20. I’ve wrestled with including that last one. It seems possibly unfair, but I’m also thinking that it’s time to fight fire with something as strong as fire, and the only weapon at my personal disposal is indignation, righteous or otherwise. This is one of the challenges of writing: thinking of audience and making choices. This choice may anger some potential natural allies, but if those allies who must have a front seat to this nonsense aren’t doing anything, they can hardly be counted as allies.
21. I was encouraged by the reader responses to the Times article. They run at least 10-1 against the idea of automated grading of writing, and many of them are well argued and even respond to arguments offered by other commenters. It’s an excellent model for academic conversation.
22. The purpose of writing is to communicate with an audience. In good conscience, we cannot ask students to write something that will not be read. If we cross this threshold, we may as well simply give up on education. I know that I won’t be involved. Let the software “talk” to software. Leave me out of it.
Nobody grades Tweets, yet...