There’s another fascinating study out from the Stanford History Education Group, the folks who studied high school and college students’ capacity to figure out what news is fake, finding that they don’t really know how to do that. Turns out – surprise! – trained historians don’t really know how to do that, either. Historians tend to focus on critiquing textual evidence, unlike trained fact checkers who immediately confirm and corroborate with other sources, something the report calls “reading laterally.” No doubt historians would have gotten the answers eventually, but on a timed test, close reading didn’t work as well as lateral reading. We rely too much on training and trust.
This reminds me of an article Marc Meola published more than a decade ago, urging librarians to stop using source evaluation checklists with students (CRAAP is a classic – determine currency, relevance, accuracy, authority, purpose) without also including the lateral part. Can you confirm claims in another source? How does this source fit with what you’re finding elsewhere? What do experts say? If you don’t already know that a well-dressed website or an article full of footnotes is a front for a fringe group known for playing fast and loose with facts, the site (or the article) certainly won’t let you in on it. So read around. See how this information fits with other sources.
But “other sources” can be tricky, too, if we don’t get out of our social silos. As Kris Shaffer explains in a handy article listing “10 Ways to Get Started Fighting Internet Propaganda,” we tend to trust people we know, and social media is built on trust relationships. We may miss ideas that aren’t entertained by our friends and friends-of-friends. We might also come across disinformation that has been alchemically transformed into misinformation: a friend of a friend of a friend innocently shares something that started out as malicious propaganda but somewhere along the line was taken at face value. We can’t spend hours checking out everything we encounter, so fact-checking has to be a collaborative and social process. Learn who to trust, look for patterns, and don’t share anything you aren’t sure is true. (Some of Shaffer’s ten ways involve APIs, scraping websites, and analyzing network relationships using Python, duct tape, and magic spells. He lost me after the first five or so, but even halved, it’s a good set of strategies.) We have to somehow identify the limits of our networks, read laterally, and practice humility when we aren’t sure.
Okay. But what I’m wrestling with is helping students learn how to make up their minds when there’s so much information coming at us and so many machines trying to make up our minds for us. A few stories from the past couple of weeks make me wonder what we can possibly mean by “information literacy,” given the way we live now.
- Twitter, which has yet to turn a profit, knows it’s overrun by bots but won’t confront the problem because doing so could hurt its already shaky business model. Twitter’s libertarian business philosophy makes it both easy to manipulate and resistant to accountability. It also gives shady operators the opportunity to create countless fake accounts and erase their activity at will, thwarting anyone who wants to know what’s going on. (“Why Twitter Is the Best Social Media Platform for Disinformation” by Thomas Rid at Motherboard.)
- Where do you turn for the latest news, if not to Twitter? Years ago Google grew so dominant that it became a verb meaning “to find out.” In the hours journalists take to craft the first draft of history, propagandists and mischief-makers rush in with garbage. We may get impatient waiting for the story (how many times can CNN anchors babble about what they don’t know and haven’t confirmed?), but Google gives you answers right away. Those answers probably came from Twitter, and they may be parodies or hoaxes or serious attempts to mislead people. In the absence of good content, Google takes what it can get, and so we learn all kinds of wrong things about breaking events. (“Google’s Mass-Shooting Misinformation Problem” by Alexis Madrigal at The Atlantic.)
- A pox on these platforms! you may think. I just won’t use them. That won’t stop them from using you. I don’t have a Facebook account, but Facebook knows all kinds of things about me: they buy personal information from data brokers, they know where I’ve been whenever I visit a page with a “like us on Facebook” button, they use facial recognition to identify me in photos people upload, and they use their “people you may know” algorithm to scoop up the contact entries people keep for me on their laptops and phones. That’s an extraordinary amount of personal data, and none of it is under my control. I don’t have a Facebook profile, but they can profile me anyway. Only those who sign up and create profiles can edit them, and even they can’t see the bits added by the “people you may know” algorithm. Why can’t we control that information about us? Facebook claims that would be a privacy violation. Sigh. (“How Facebook Figures Out Everyone You’ve Ever Met” by Kashmir Hill at Gizmodo.)
I’m getting ready to meet with yet another group of first-semester students. They’ll spend a little time exploring the library and learning some things about the internet and how to recognize the kinds of information they’ll be expected to use in academic writing over the next four years. I hope they learn something valuable, but I’m feeling sheepish about calling this sort of learning “information literacy.” I’ve been doing this for thirty years. Though technology has altered libraries and publishing in those decades, until this year I thought the fundamentals students needed to know – how to frame a question, how to think critically about what they find, how to weigh divergent arguments and create their own with a sense of integrity – were basically unchanging. But the world we’ve found ourselves in now, one where we’re being given personalized bodies of knowledge created by propagandists and bots and artificial intelligence, all locked up in corporate black boxes – I don’t even know where to start.