A year ago, I saw artificial intelligence as a shortcut to avoid deep thinking. Now, I use it to teach thinking itself.

Like many educators, I initially viewed artificial intelligence as a threat—an easy escape from rigorous analysis. But banning AI outright became a losing battle. This semester, I took a different approach: I brought it into my classroom, not as a crutch, but as an object of study. The results surprised me.

For the first time this spring, my students are not just using AI—they are reflecting on it. AI is not simply a tool; it is a mirror, exposing biases, revealing gaps in knowledge and reshaping students’ interpretive instincts. In the same way a river carves its course through stone—not by force, but by persistence—this deliberate engagement with AI has begun to alter how students approach analysis, nuance and complexity.

Rather than rendering students passive consumers of information, AI—when engaged critically—becomes a tool for sharpening analytical skills. Instead of simply producing answers, it provokes new questions. It exposes biases, forces students to reconsider assumptions and ultimately strengthens their ability to think deeply.

Yet too often, universities are focused on controlling AI rather than understanding it. Policies around AI in higher education often default to detection and enforcement, treating the technology as a problem to be contained. But this framing misses the point. The question in 2025 is not whether to use AI, but how to use it in ways that deepen, rather than dilute, learning.

AI as a Tool for Deep Engagement

This semester, I’ve asked students to use AI in my seminar on Holocaust survivor testimony. At first glance, using AI to analyze these deeply human narratives seems contradictory—almost irreverent. Survivor testimony resists coherence. It is shaped by silences, contradictions and emotional truths that defy categorization. How can an AI trained on probabilities and patterns engage with stories shaped by trauma, loss and the fragility of memory?

And yet, that is precisely why I have made AI a central component of the course—not as a shortcut to comprehension, but as a challenge to it. Each week, my students use AI to transcribe, summarize and identify patterns in testimonies. But rather than treating AI’s responses as authoritative, they interrogate them. They see how AI stumbles over inconsistencies, how it misreads hesitation as omission, how it resists the fragmentation that defines survivor accounts. And in observing that resistance, something unexpected happens: students develop a deeper awareness of what it means to listen, to interpret, to bear witness.

AI’s sleek outputs conceal a deeper problem: It is not neutral. Its responses are shaped by the biases embedded in its training data, and by its relentless pursuit of coherence—even at the expense of accuracy. An algorithm will iron out inconsistencies in testimony, not because they are unimportant, but because it is designed to prioritize seamlessness over contradiction, clarity over ambiguity. But testimony is ambiguity. Memory thrives on contradiction. If left unchecked, AI’s tendency to smooth out rough edges risks erasing precisely what makes survivor narratives so powerful: their rawness, their hesitations, their refusal to conform to a clean, digestible version of history.

For educators, the question is not just how to use AI but how to resist its seductions. How do we ensure that students scrutinize AI rather than accept its outputs at face value? How do we teach them to use AI as a lens rather than a crutch? The answer lies in making AI itself an object of inquiry—pushing students to examine its failures, to challenge its confident misreadings. AI does not replace critical thinking; it demands it.

AI as Productive Friction

If AI distorts, misinterprets and overreaches, why use it at all? The easy answer would be to reject it—to bar it from the classroom, to treat it as a contaminant rather than a tool. But that would be a mistake. AI is here to stay, and higher education has a choice: either leave students to navigate its limitations on their own or make those limitations part of their education.

Rather than treating AI’s flaws as a reason for exclusion, I see them as opportunities. In my classroom, AI-generated responses are not definitive answers but objects of critique—imperfect, provisional and open to challenge. By engaging with AI critically, students learn not just from it, but about it. They see how AI struggles with ambiguity, how its summaries can be reductive, how its confidence often exceeds its accuracy. In doing so, they sharpen the very skills AI cannot replicate: skepticism, interpretation and the ability to challenge received knowledge.

This approach aligns with Marc Watkins’s observation that “learning requires friction.” AI can be a force of productive friction in the classroom. Education is not about seamlessness; it is about struggle, revision and resistance.

Teaching history—and especially the history of genocide and mass violence—often feels like standing on a threshold: one foot planted in the past, the other stepping into an uncertain future. In this space, AI does not replace the act of interpretation; it compels us to ask what it means to carry memory forward.

Used thoughtfully, AI does not erode intellectual inquiry—it deepens it. If engaged wisely, it sharpens—rather than replaces—the very skills that make us human.

Jan Burzlaff is a postdoctoral associate in the Jewish Studies program at Cornell University.
