Since the release of ChatGPT in late 2022, many questions have been raised about the impact of generative artificial intelligence on higher education, particularly its potential to automate the processes of research and writing. Will ChatGPT end the college essay or prompt professors, as John Warner hopes, to revise our pedagogical ends in assigning writing? At Washington College, our Cromwell Center for Teaching and Learning organized a series of discussions this past spring motivated by questions: What is machine learning doing in education? How might we define its use in the classroom? How should we value it in our programs and address it in our policies? True to the heuristic nature of inquiry in the liberal arts and sciences, this series generated robust but unfinished conversations that elicited some initial answers and many more questions.
And yet, as we continue to raise important questions about AI while adapting to it, surprisingly few questions have been asked of AI, literally. I have come to notice that the dominant grammatical mood in which AI chatbot conversations are conducted, or prompted, is the imperative. As practitioners of the new art of “prompt engineering” (the skillful eliciting of output from an AI model, a skill that has emerged as a lucrative career opportunity) emphasize, chatbots respond best to explicit commands. The best way to ask AI a question, it seems, is to stop asking it questions.
Writing in The New York Times “On Tech: AI” newsletter, Brian X. Chen defines “golden prompts” as the art of “asking questions” that will “generate the most helpful answers.” However, Chen’s prompts are all commands (such as “act as if you are an expert in X”): no interrogatives, not even a “please,” recommended for this new art of computational conversation. Nearly every recommendation I have seen from AI developers perpetuates this drift of question-based inquiry into blunt command. Consider prominent AI adopter and Wharton School professor Ethan Mollick. Observing that students tend to get poor results from chatbot inquiry when they ask detailed questions, Mollick proposes a simple solution. Instead of guiding or instructing the chatbot with questions, Mollick writes, tell it what you want it to do and, in an unnerving analogy, boss it around like you would an intern.
Why should it matter that our newest writing and research technologies are rapidly shifting the modes and moods of inquiry from interrogatives to imperatives? Surely many seeking information from an internet search no longer phrase inquiry as a question. But I would agree with Janet H. Murray that new digital environments for AI-assisted inquiry do not merely add to existing modes of research, but instead establish new “expressive forms” with different characteristics, new affordances and constraints. First among these for Murray, writing in Hamlet on the Holodeck (MIT Press, 1998), is the procedural or algorithmic basis of digital communication. A problem-solving procedure, an algorithm follows precise rules and processes that result in a specific answer, a predictable and executable outcome.
Algorithmic procedure might provide a beneficial substructure for fictional narrative, driving a reader (or player, in the case of a video game) toward the resolution of a complex and highly determined plot. But algorithmic rules could also pose a substantial constraint for students learning to write an essay, where more open-ended heuristics, or brief, general rules of thumb and adaptive commonplaces, are more appropriate for composition that aims for context-contingent persuasion: plausibility, not certainty.
Drawing on lessons from cognitive psychology, educator Mike Rose long ago addressed the problem of writer’s block in these very terms of algorithm and heuristic. Process and procedure are necessary for writing, but when writing is presented algorithmically, as a rigid set of rules to execute, developing writers can become cognitively blocked. Perhaps you remember, as I do, struggling to reconcile initial attempts at drafting an essay with a lengthy, detailed Harvard outline worked out entirely in advance. Rose’s seminal advice from 1980, that educators present learning prompts more heuristically and less absolutely, remains timely and appropriate for the new algorithms of AI.
In turning questions into commands, while still referring to them as questions, we perpetuate cognitive blocking while inducing, apparently, intellectual idiocy. (Ask better questions by not asking them?) We transform key rhetorical figures of inquiry like “question” and “conversation” into dead metaphor. Consider what is happening to the word “prompt.” Students know the word, at least for now, as a term of art in writing pedagogy: the guidelines for an assignment in which instructors identify the purpose, context and audience for the writing, preparing the grounds for the type of question-based inquiry the students will be pursuing. In The Craft of Research (University of Chicago Press), the late Wayne Booth and his colleagues refer to these heuristic guidelines as helping students make “rhetorically significant” choices.
Reaching back to classical rhetoric, heuristics such as Aristotle’s topics of invention or the four questions of stasis theory provide adaptive and responsive ideas and structures that point toward possible responses, not predetermined answers. When motivating questions are displaced by commands, AI-generated inquiry risks rhetorical unresponsiveness. When answers to unasked questions are removed from audience and context, the opaque information retrieved no longer needs a writer. The user can command not just the “answer” but also its arrangement, style and delivery. Once inquiry is offloaded to AI, why not the entire composition?
As educators we should worry, along with Nicholas Carr in The Glass Cage (W.W. Norton, 2014), about the cognitive de-skilling that attends the automation of intellectual inquiry. Writing before ChatGPT, Carr was already thinking about the ways that algorithmic grading programs might drift into algorithmic writing and thinking. As it becomes more efficient to pursue question-based inquiry without asking questions, we potentially lose more than the skill of posing questions. We potentially lose the means and the motivation for the inquiry. It is hard to be curious about ideas when information can be commanded.
As we continue to raise questions about AI, we need not resist all things algorithmic. After all, we were working and teaching with rule-based procedures long before the computer. But we can choose, as educators, to use emerging algorithmic tools more heuristically and with more rhetorically significant purpose. Rhetorically speaking, the best heuristics are simple concepts that can be applied to interrogate and sort through complex ideas, adapting prior knowledge to new contexts: What is X? Who values it? How might X be viewed from alternative perspectives? Such is inquiry, which, like education, can be guided but hardly commanded. If we are going to use AI tools to find and shape answers to our questions, we should be the ones generating and posing the questions.