Back in May, my university -- like many Canadian institutions -- announced we’d be going online for fall. It was the right decision: students, faculty and staff have remained safe from the coronavirus. But as campuses across the continent and around the world have shifted to online learning, it’s become clear that institutions -- and many of us who work within them -- may be oblivious to a very different safety risk, one that’s been amplified in recent months.
The pandemic has fast-forwarded higher education’s entanglement in proprietary, datafied systems, but the sector has failed entirely to grapple with how data impact what we do.
Sure, most of us know at some level that we are fish swimming in increasingly datafied waters. Our devices track our searches and our locations and even our casual off-line conversations, pitching products we’ve just spoken of back to us on social platforms.
But the ways our pedagogies are shaped by platforms with terms of service (TOS) that were never written to be read by nonlawyers? The data-based decisions that get made about students’ lives and futures based on class activities we design? The race and gender biases built into the algorithmic tools that judge whether a student on a Zoom call or a proctored exam is appropriately attentive?
I’ve taught online for nearly 20 years, but these are relatively new issues on higher education’s radar as a whole. And as concerns multiply about privacy implications and exam-proctoring tools in our current mass online experiment, they are increasingly urgent ones.
What Do We Know About Data?
In summer 2020, graduate student Erica Lyons and I conducted a five-minute pilot survey of university educators -- in any type of teaching role, at any academic status position, anywhere in the world -- about their knowledge, practices and perspectives regarding data. In the brief window between the emergency remote teaching of March and April 2020 and September’s full-scale launch of online learning, we wanted to establish a baseline picture of how higher education instructors understand data and classroom tools.
The open data set from the survey is now published; a full paper is under submission and will be available as a preprint soon, and another is taking shape. The key takeaways are stark.
The survey was intentionally minimalist: it explored knowledge and practices through simple proxy questions and asked about people’s experiences and opinions regarding digital platforms and institutional data collection. It drew 339 responses: 38 percent from Canada, 40.4 percent from the U.S. and the remainder from 25 different countries, spanning every continent except Antarctica. Among respondents, 18.6 percent had taught for fewer than five years and 19.8 percent for 21 or more years; the rest were relatively evenly distributed between six and 20 years of teaching experience.
On the data practices front, the survey asked all 339 respondents how often they read the full terms of service of a new educational technology that they plan to use with students. Almost 60 percent reported reading the TOS less than 10 percent of the time, while just under 10 percent of respondents read TOS 90 percent of the time or more. That means that when we enter new online classroom spaces, or even choose our own new tools for teaching, fewer than one in 10 of us may be really clear on what those decisions mean for our students’ data. Or, potentially, our own.
On the knowledge front, the survey asked the 315 respondents who indicated they had previously taught with a learning management system whether they could name the country in which the servers for that LMS were located. This matters because data hosted on servers outside an institution’s home country may be subject to that country’s governing legislation, and different countries take different policy approaches to student data privacy.
Turns out? A full two-thirds (65.8 percent) of the respondents who’d taught using an LMS did not know where the servers for that LMS were housed. Now, unlike TOS, LMS decisions are made by institutions rather than individual educators. But the rationale for those decisions, and why they matter, is clearly not being communicated within campus communities.
Educators’ data literacy is not being fostered, even in the midst of higher education’s profound metamorphosis into a datafied sector. In exchanging the familiar walls of the classroom for digital platforms, we’ve handed over control of the core structures of higher education to systems that are designed for data extraction and profit.
What Do We Do About Data?
The survey findings represent a wake-up call to academia: it’s time to educate our campus communities about the data implications of tools that -- in an extraordinarily short window of time -- have come to effectively constitute a huge proportion of our teaching environments.
Make no mistake, though: the survey findings are not an indictment of educators. The legalese of TOS is seldom communicated through a pedagogical filter. LMS analytics and underpinnings have been approached by campuses primarily as IT territory to be optimized, not as a new literacy that impacts governance. But if instructors want to continue to steward the shared governance of the academy, we do need to become cognizant of its contemporary infrastructures. And so do our students and anyone else on campus who might bear the risks of algorithmic decision making.
We need a policy approach to data ethics, and we need it now. The landscape of data privacy has been framed as a technical and legal issue, when the reality is an ethical one: students at academic institutions around the world are subject to being tracked and surveilled by learning platforms in the course of daily studies.
I propose that campuses and institutional systems gather their pedagogical experts, technical experts and decision makers to translate, consolidate and communicate key information about classroom tools and data. Transparent data governance arrangements, plain language “what this means for you” pop-ups for faculty and students, and clear policy guidance about data implications would help build data literacies across campuses, protecting faculty, staff and students not just from privacy risks but from unintended violations.
Just as higher education developed sectorwide ethical norms and protocols governing research on human subjects over the 20th century, so too it’s now time for academia to develop ethics-based policy surrounding the data that our institutions generate every day. If our sector does not mount its own ethics- and policy-focused challenge to the cultural norm of uninformed assent to data mining, sectors with far more direct profit motives are unlikely to do so.
And we may someday look back on this pandemic moment as one in which we failed to think about safety broadly enough.
Bonnie Stewart is an educator and social media researcher interested in what digital networks mean for institutions and society. Assistant professor of online pedagogy and workplace learning in the University of Windsor Faculty of Education, Canada, Bonnie was an early MOOC researcher and ethnographer of Twitter. Bonnie's current research interests include the data literacies of educators and what it means to know, to learn and to be a citizen in our current information ecosystem.