In The Checklist Manifesto, Harvard surgeon and New Yorker contributor Atul Gawande shows the checklist’s power to reduce surgical complications by making sure sound protocols are followed. Gawande sees the checklist as a tool for reducing complexity to a list, lessening the demands on our memories and our reliance on instinct.
Some of the findings of Gawande’s book seem obvious. A checklist to ensure the proper procedures for changing a central line helps significantly reduce infections. But other, seemingly tangential steps also had significant effects, including introducing everyone in the operating room so they know each other by name. As Gawande told NPR: "Making sure everybody knew each other's name produced what they called an activation phenomenon. The person, having gotten a chance to voice their name, to speak in the room — were much more likely to speak up later if they saw a problem."
Teaching writing often involves using checklists. I give students an editing and polishing checklist for “finishing off” their papers that includes items like reading the paper out loud to themselves and searching for common homonym errors like there/their/they’re. When these errors occur, I can go back to the checklist and ask whether students followed the guidance.
I’ve also long taught a checklist (not of my own design) for working with secondary sources, the “CRAAP Test.”
There are checklists inside the checklist, all written as points of inquiry, such as “When was the information published or posted?” and “Are the links functional?” under “Currency.”
I’ve long thought this to be a useful framework to help students sort through information they may want to incorporate into their writing.
I'm not so sure about that anymore.
Many may recall the research out of Stanford demonstrating the inability of students to effectively “reason about information” on the Internet, calling students “easily duped.”
Of course, the easily duped are not confined to students.
Recently, a picture of Seattle Seahawks player Michael Bennett dancing in the Seahawks locker room while burning an American flag, surrounded by cheering teammates and coaches, was brought to prominence through conservative outlets, including the “Vets for Trump” Facebook page. Thousands of comments condemning the action were posted before the Vets for Trump account removed the post.
The picture (which I will not link to, but is easily found) is obviously faked, a truly amateur Photoshop job. If that wasn’t enough, the barest inspection and minimal thought should elicit skepticism.
Even as people arrived to debunk the photo, some remained unconvinced, including those who were certain that even if it wasn’t accurate, the photo reflected a larger “truth.”
Obviously no checklist can overcome a lack of motivation to get to the genuine truth. People who want to believe Michael Bennett “hates America” will believe it no matter how much evidence they’re shown to the contrary.
Lack of motivation – of a perhaps slightly different stripe – explains the failure of my editing checklist to result in perfectly polished papers.
Students have to be motivated to apply the process. Some are, whether because of grades or internal, perfectionist tendencies. But in the past, even when I used a pass/fail (redo) metric — where “fail” required redoing until passing, and certain errors meant an automatic redo — some students would take four or five cracks to clear the bar. Over time, I’ve found that designing assignments with more inherent intrinsic motivation works better than swinging the grade cudgel above students’ heads.
But the Stanford researchers reveal a more significant problem when it comes to evaluating online media, namely that checklist tools like the CRAAP Test, which primarily ask students to closely read and examine the sources under question, are not only not sufficient, but may be actively misleading.
Digital learning specialist Mike Caulfield explains why the old checklists don’t work. He observes that tools like the CRAAP test are “heavily domain dependent, not based on skills, but on a body of knowledge that comes from mindful immersion in context.”
As we gain experience, we come to “know” things, including which sources may be reliable, as well as the ways sources that seem reliable on the surface may be biased. The CRAAP test is an effort to codify what sophisticated source evaluators are doing, except it isn't an accurate reflection of the knowledge and conditions under which we work.
Experienced academics don’t actually use the CRAAP test. The CRAAP test is an approximation of a much more sophisticated process rooted in that domain knowledge. Giving it to students as a substitute for knowledge may not be doing them any favors.
In fact, many students in the Stanford study applied checklists like the CRAAP test and came up with incorrect assumptions about the reliability of the sources anyway.
This is when I begin to panic and think that we may have an unsolvable problem on our hands, but Caulfield and the Stanford researchers point us in a different direction.
Rather than training students to evaluate sources — as has been our practice for generations — we need to adapt and help them understand not the sources, but the “web” itself. Caulfield argues for a “web literacy” that “starts with the web and the tools it provides to track a claim to ground.”
To ferret out truth, the web is something to be utilized, not evaluated.
For Caulfield, this involves a process:
1. Check for previous fact-checking work: Someone (like Snopes) may have done the legwork you need to satisfy questions about a dubious claim.
2. Go upstream to the source: Follow the links until you get to the root of the claim and then evaluate that source.
3. Read laterally: Leave the site to understand more about the site.
These are behaviors that help people build the necessary web literacy. Working the process increases the individual's familiarity with and knowledge of the domain within which they're working. That knowledge accrues in a way that creates expertise. It is a model for learning.
Checklists cannot substitute for literacy, and yet we often employ checklists in education, perhaps considering them a shortcut or pathway toward literacy. But how often is that actually the case?
Think of the “plug and chug” method generations have been taught in dealing with the quadratic equation. I learned this, apparently well enough to survive math, and yet as I sit here today, I have no idea what the quadratic equation is for, let alone possess the ability to make use of it.
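For readers who, like me, memorized the steps without the meaning, here is the standard formula in question — the “plug and chug” target. It gives the solutions to any equation of the form ax² + bx + c = 0:

```latex
% Roots of the general quadratic ax^2 + bx + c = 0 (with a \neq 0):
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```

The checklist version tells you to plug a, b, and c into the formula and chug through the arithmetic; what it never requires is knowing that the answer is where a parabola crosses the x-axis — which is exactly the gap between procedure and literacy.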
Even worse, I believe there are times when checklists provided to students in order to facilitate “achievement” actively mask illiteracy.
Checklists, often in the form of rubrics, and often tied to high-stakes assessments, are ubiquitous in the teaching of writing. I have used somewhat more sophisticated checklists for different assignments throughout my career.
These thoughts are making me uncomfortable. I'm not sure I want to connect my own dots.
But what if checklists are only appropriate for things we’ve already learned but are prone to forget, like medical professionals following proper pre-surgery protocols? What is the impact of checklists on people who have not yet developed disciplinary instincts, or who have yet to experience the full complexities of their field?
What happens both in terms of motivation and curiosity when we substitute checklists for literacy and knowledge?
I’m a little afraid of the answer to that last question. I’m going to explore it more next time out.
 A fire of the size and intensity in the picture would trigger indoor sprinkler systems. A football team celebrating victories by ritually burning the flag would be a major story not confined to a partisan Facebook feed. Even those working on the racist assumption that Bennett must be a habitual flag burner because…reasons, could note Seahawks coach Pete Carroll in the background, apparently cheering the act. This phenomenon is not unique to conservatives. A recent tweet from an obvious David Letterman "parody" account about ending "white privilege" went viral, with many apparently believing they were the thoughts of the real David Letterman.
 I strongly encourage anyone who’s curious to read this handy summary of the Stanford findings by the researchers themselves. For space reasons I can’t go into all their examples, but they’re very interesting.