
TJ Kalaitzidis, assistant director of classroom technology and innovation at Brown University, describes himself as a learning scientist, philosopher, educator and designer. With a Ph.D. in digital media and learning from the University of Wisconsin–Madison and a career that includes teaching and instructional design, TJ brings a wealth of knowledge and experience to advancing learning. Recently, TJ shared with me a white paper he authored, titled “How Generative AI Fixes What Higher Education Broke,” which inspired me to send him three questions to answer.

Q: Tell us about your role at Brown. How do you integrate your interests and expertise in learning science, philosophy and design into your work?

A: As you mentioned, I currently serve as the assistant director of classroom technology and innovation at Brown, though like most titles, it only hints at my real work. My role is somewhat chimeric: situated at the intersection of pedagogy, systems design and technology strategy. On paper, I partner with faculty to integrate digital learning tools. In practice, I interrogate the assumptions those tools encode and use that framing to rethink teaching practices, faculty mindsets and institutional policies.

This second-order task has historically been more covert. The recent fervor around gen AI has created an opening to bring it into the open.

I draw heavily from my background in the learning sciences and teaching theory to frame what “good” learning actually looks like; this usually puts my ideas in sharp contrast with traditional models. My prior work in philosophy helps me push beyond pedagogy to ask deeper epistemological questions: What counts as learning? Who gets to decide? And my training in design allows me to give those questions form. At Brown, I’ve helped prototype interfaces, systems and experiences (some successful, some very instructively not) to explore new ways of thinking about learning in practice.

Q: Let’s talk about “How Generative AI Fixes What Higher Education Broke.” What are the main arguments you make in the white paper and what do you hope the impact of your thinking might be on how universities approach gen AI?

A: The white paper’s core argument stems from a few motivations: 1) the ever-growing discourse about gen AI and education that seems to skim the surface of the problem; 2) faculty apprehension and dissatisfaction with current attempts at “integration”; and 3) a sense that my prior work in both learning sciences and philosophy might prove useful in this moment. Maybe I am wrong! But I think it is at least worth sharing and letting the audience judge.

The core argument is this: Gen AI itself hasn’t disrupted higher ed. It has revealed latent incoherences that we’ve been tacitly managing for decades and makes them impossible to ignore.

For at least 50 years, we’ve clung to a formalist model of education: measuring “learning” by how well students reproduce symbolic knowledge structures under artificial constraints (I am, of course, pointing to normative cultural practices of higher ed). But gen AI can now fluently perform those tasks at scale. To me, this raises an uncomfortable question: If a machine can do it, are our pedagogies delivering learning as we promised?

I suggest the answer is no.

Many academic structures (grading, traditional assessments, etc.) aren’t just pedagogical tools; they’re artifacts that reveal a conflation of proxies for learning with learning itself. Once that conflation is exposed, alternatives emerge: Learning should be understood not as the ability to mimic forms, but as the capacity to apply, reframe, question and transfer knowledge. That means asking, can the learner do something meaningful with what they know? Can they adapt it? Transform it? Explain it to peers? Critique it? These are dimensions where gen AI struggles in isolation, but where it can propel humans to act with greater clarity, creativity and precision.

This conception doesn’t reject knowledge; it reorients it. It distinguishes between knowing about and knowing within. The former can be simulated, as evidenced by gen AI. The latter must be lived, enacted and tested in context by people, with tools. In this sense, I frame gen AI as another cognitive tool we must learn to accept and integrate.

I simply hope this paper sparks better questions. I do offer some practical guidance around LLM architecture, critical AI literacy and other applications. But generally, the point is this: If we treat gen AI as a threat to education, I think we are doomed. We should treat it as a diagnostic tool that forces us to confront what we actually value. Universities can either react by doubling down on surveillance and control, or they can treat this as a moment for reflection and design something better.

Q: What advice do you have for current Ph.D. students and early-career academics interested in pursuing an alternative academic path? Why did you choose this path and what have you learned along the way that you can pass on?

A: “Choose” might be too strong a word. Life circumstances had a significant sway in shaping my path, and like many who end up in alt-ac roles, I followed opportunity more than any grand plan. That said, I consider myself incredibly lucky and genuinely grateful to have landed where I have.

While the constraints of alt-ac careers are often discussed, the benefits tend to fly under the radar. Here’s what I’ve found:

  1. I’m disciplinarily unsiloed.

I don’t owe allegiance to any one field’s intellectual orthodoxies. That means I can move between discourses fluidly, synthesize ideas across domains and see/challenge boundaries others take for granted. If you’re an integrative or border-thinking scholar, alt-ac offers a rare and generative vantage point.

  2. I get to actually make change happen.

In grad school, I studied how to design learning environments. The traditional academic pathway was clear: run controlled interventions, publish the results, hope someone eventually reads them and wonder if they get implemented. In my current role, I can still research if I want, but more often I get to build. Most days it’s small-i innovation, but occasionally I get to engage with big-I problems (like gen AI) and help institutions think differently in real time.

  3. My success depends on relationships, not citations.

This is perhaps the most underappreciated truth of alt-ac life. I’m not under pressure to publish or perish. Instead, my influence is tied to my ability to build trust, form coalitions and translate complex ideas into institutional action. It’s not easier; it’s just a different kind of hard. And for me, it’s a better fit.
