Almost a third of students report that they don’t know how or when to use generative AI to help with coursework. On our campus, students tell us they worry that if they don’t learn how to use AI, they will be left behind in the workforce. At the same time, many students worry that the technology undermines their learning.
Here’s Gabby, an undergraduate on our campus: “It turned my writing into something I didn’t say. It makes it harder for me to think of my ideas and makes everything I think go away. It replaces it with what is official. It is correct, and I have a hard time not agreeing with it once ChatGPT says it. It overrides me.”
Students experience additional anxiety around accusations of unauthorized use of AI tools—even when they are not using them. Here’s another student: “If I write like myself, I get points off for not following the rubric. If I fix my grammar and follow the template, my teacher will look at me and assume I used ChatGPT because brown people can’t write good enough.”
Faculty guidance in the classroom is critical to addressing these concerns, especially as campuses increasingly provide students with access to enterprise GPTs. Our own campus system, California State University, recently rolled out an AI strategy that includes a “landmark” partnership with companies such as OpenAI, and a free subscription to ChatGPT Edu for all students, faculty and staff.
Perhaps unsurprisingly, students are not the only ones who feel confused and worried about AI in this fast-moving environment. Faculty also express confusion about whether and under what circumstances it is OK for their students to use AI technology. In our roles at San Francisco State University’s Center for Equity and Excellence in Teaching and Learning (CEETL), we are often asked about the need for campuswide policies and the importance of tools like Turnitin to ensure academic integrity.
As Kyle Jensen noted at a recent American Association of Colleges and Universities event on AI and pedagogy, higher ed workers are experiencing a perceived lack of coherent leadership around AI, and an uneven delivery of information about it, in the face of the many demands on faculty and administrative time. Paradoxically, faculty are both keenly interested in the positive potential of AI technologies and insistent on the need for some sort of accountability system that punishes students for unauthorized use of AI tools.
The need for faculty to clarify the role of AI in the curriculum is pressing. To address this at CEETL, we have developed what we are calling “Three Laws of Curriculum in the Age of AI,” a play on Isaac Asimov’s “Three Laws of Robotics,” which were written to ensure that humans remained in control of technology. Our three laws are not laws, per se; they are a framework for thinking about how to address AI technology in the curriculum at all levels, from the individual classroom to degree-level road maps, from general education through graduate courses. The framework is designed to support faculty as they work through the challenges and promises of AI technologies, and it lightens their cognitive load by connecting AI technology to familiar ways of designing and revising curriculum.
The first law concerns what students need to know about AI, including how the tools work as well as their social, cultural, environmental and labor impacts; potential biases; tendencies toward hallucinations and misinformation; and propensity to center Western European ways of knowing, reasoning and writing. Here we lean on critical AI to help students apply their critical information literacy skills to AI technologies. Thinking about how to teach students about AI aligns with core equity values at our university, and it harnesses faculty’s natural skepticism toward these tools. This first law—teaching students about AI—offers a bridge between AI enthusiasts and skeptics by grounding our approach to AI in the classroom with familiar and widely agreed-upon equity values and critical approaches.
The second part of our three laws framework asks what students need to know in order to work with AI ethically and equitably. How should students work with these tools as they become increasingly embedded in the platforms and programs they already use, and as they are integrated into the jobs and careers our students hope to enter? As Kathleen Landy recently asked, “What do we want the students in our academic program[s] to know and be able to do with (or without) generative AI?”
The “with” part of our framework supports faculty as they begin the work of revising learning outcomes, assignments and assessment materials to include AI use.
Finally, and perhaps most crucially (and related to the “without” in Landy’s question), what skills and practices do students need to develop without AI, in order to protect their learning, to prevent deskilling and to center their own culturally diverse ways of knowing? Here is a quote from Washington University’s Center for Teaching and Learning:
“Sometimes students must first learn the basics of a field in order to achieve long-term success, even if they might later use shortcuts when working on more advanced material. We still teach basic mathematics to children, for example, even though as adults we all have access to a calculator on our smartphones. GenAI can also produce false results (aka ‘hallucinations’) and often only a user who understands the fundamental concepts at play can recognize this when it happens.”
Bots sound authoritative, and because they sound so convincing, they can override or displace students’ own thinking; their use may therefore curtail opportunities for students to develop and practice the kinds of thinking that undergird many learning goals. Protecting student learning from AI helps faculty situate their concerns about academic integrity in terms of the curriculum, rather than in terms of detecting or policing student behaviors. It invites faculty to think about how they might redesign assignments to provide spaces for students to do their own thinking.
Providing and protecting such spaces undoubtedly poses increased challenges for faculty, given the ubiquity of AI tools available to students. But we also know that protecting student learning from easy shortcuts is at the heart of formal education. Consider the planning that goes into determining whether an assessment should be open-book or open-note, take-home or in-class. These decisions are rooted in the third law: What would most protect student learning from the use of shortcuts (e.g., textbooks, access to help) that undermine their learning?
University websites are awash in resource guides for faculty grappling with new technology. It can be overwhelming for faculty, to say the least, especially given high teaching loads and constraints on faculty time. Our three laws framework provides a scaffold for faculty as they sift through resources on AI and begin the work of redesigning assignments, activities and assessments to address AI. You can see our three laws in action here, in field notes from Jennifer’s efforts to redesign her first-year writing class to address the challenges and potential of AI technology.
In the spirit of connecting the new with the familiar, we’ll close by reminding readers that while AI technology poses new challenges, these challenges are in some ways not so different from the work of curriculum and assessment design that we regularly undertake when we build our courses. Indeed, faculty have long grappled with the questions raised by our current moment. We’ll leave you with this quote, from a 1991 (!) article by Gail E. Hawisher and Cynthia L. Selfe on the rise of word-processing technology and writing studies:
“We do not advocate abandoning the use of technology and relying primarily on script and print for our teaching without the aid of word processing and other computer applications such as communication software; nor do we suggest eliminating our descriptions of the positive learning environments that technology can help us to create. Instead, we must try to use our awareness of the discrepancies we have noted as a basis for constructing a more complete image of how technology can be used positively and negatively. We must plan carefully and develop the necessary critical perspectives to help us avoid using computers to advance or promote mediocrity in writing instruction. A balanced and increasingly critical perspective is a starting point: by viewing our classes as sites of both paradox and promise we can construct a mature view of how the use of electronic technology can abet our teaching.”