Despite the attention generative artificial intelligence has received in higher education, it is often discussed solely as a technological innovation rather than a pedagogical one. However, viewing it through a pedagogical lens is crucial for understanding its full potential and the ways it will shape both teaching practices and student learning outcomes.
I recently co-authored a paper with Judith A. Giering in Innovative Higher Education that added two key contributions to the literature: a more comprehensive definition of pedagogical innovation and a valid taxonomy capable of precisely characterizing the range of pedagogical innovations in higher education.
Drawing inspiration from a handful of definitions in the literature, my colleague and I define a pedagogical innovation as “an adaptation of a commonly employed teaching practice or activity or a distinctly new, creative one intended to increase 1) educational equity, 2) student retention or persistence, 3) sense of belonging, 4) learning, 5) engagement, 6) instructor engagement, or 7) efficiency.” While generative AI is not inherently a pedagogical innovation, its applications in higher education have the potential to influence all seven of these levers. But how strongly can generative AI nudge—or perhaps, push—these levers toward meaningful educational change?
Before answering, let me draw from a key finding in educational literature. In 1984, the educational psychologist Benjamin Bloom published a seminal paper titled “The 2 Sigma Problem.” His research compared conventional teaching methods to a mastery teaching approach combined with one-on-one tutoring.
Conventional teaching—the predominant instructional model at all levels of education—follows a one-teacher-to-many-students format with periodic testing to assess learning. Students progress through content regardless of their performance on these tests. In contrast, mastery teaching integrates frequent formative assessments and feedback loops, ensuring students do not progress until they demonstrate mastery of prior material.
When paired with one-on-one tutoring, mastery teaching becomes even more powerful, offering precise, timely feedback and individualized corrective procedures.
Here’s the interesting part: Bloom discovered that students in mastery teaching environments with one-on-one tutoring performed two standard deviations better on achievement tests than those in conventional classrooms. This is the essence of “The 2 Sigma Problem”—replicating the benefits of one-on-one tutoring in group instruction. Bloom challenged the educational community to devise conditions that would allow most students to reach these higher levels without needing human tutors.
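To make the size of that effect concrete: assuming normally distributed test scores, a gain of two standard deviations places the average tutored student at roughly the 98th percentile of the conventional-classroom distribution. The arithmetic can be checked with nothing more than the standard normal cumulative distribution function:

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# An effect size of two standard deviations (d = 2) means the average
# tutored student outscores this fraction of conventionally taught peers:
percentile = normal_cdf(2.0)
print(f"{percentile:.1%}")  # prints 97.7%
```

This is exactly the "98 percent" figure Bloom reports; nothing here depends on the tutoring study itself, only on the normality assumption.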
Many have since theorized that AI-powered tutors, particularly advanced large language models and multimodal bots, could provide the solution Bloom envisioned. These AI tutors offer personalized, patient and always-available assistance, regardless of a student’s prior academic background or preparation.
Recent research reported in a working paper offers promising evidence. In a randomized, controlled study of nearly 250 physics students, Gregory Kestin and colleagues compared learning gains and motivation between students learning in a traditional active learning classroom environment and those learning in partnership with an AI-powered tutor. The results were compelling: Students using AI tutors learned more than twice as much in less time than those in active learning classrooms. Furthermore, students using AI tutors reported higher engagement and motivation when solving difficult problems.
While further research is needed to generalize these findings across different institutional contexts and more diverse student populations, this study highlights how generative AI tutors could replicate the benefits Bloom observed with human tutors.
But what about the other innovation levers? It’s plausible to expect that AI tutors could also improve educational equity. Several studies have shown that generative AI disproportionately benefits workers with lower initial ability, suggesting that AI tutors could help close opportunity gaps—assuming equitable access to the best AIs available. This, in turn, could lead to improved student persistence and retention rates.
However, the impact of AI on students’ sense of belonging is less straightforward. While AI tutors offer many advantages, immersive technologies often correlate with increased isolation, loneliness and anxiety. Learning in isolation with an AI tutor could exacerbate these issues, complicating efforts to foster a sense of community and belonging among students.
On the instructional side, AI-driven instructional coaches offer considerable promise for increasing teacher engagement and efficiency. Already, bots are helping educators to design more effective courses, develop engaging assignments and streamline feedback and grading processes. As these AI tools evolve, they may not only make teaching more efficient but also inspire educators by alleviating administrative burdens, allowing them to focus more on creativity and innovation in their teaching.
Thus far, I have discussed only one of the six domains from the taxonomy mentioned in my introductory remarks: the intended outcomes of an innovation. I will now turn briefly to the other five domains: degree of innovation, focus of innovation, barriers to adoption, risks of adoption and costs.
Degree of Innovation
The degree of innovation can be understood as its disruptive potential—how far the pedagogical innovation diverges from current practices. One effective framework for evaluating this characteristic is the SAMR model, which categorizes innovations into four levels: substitution, augmentation, modification and redefinition. Let me illustrate each of these disruption levels with an example.
Imagine an instructor who frequently polls students in class to gauge surface-level engagement. Previously, they relied on hand-raising or a show of fingers, but now they use electronic polling software to collect answers. This is an example of substitution, where technology replaces an instructional or learning activity without fundamentally changing or improving it.
Now, imagine the instructor using polling software to display real-time data analytics on students’ understanding of the material, enabling them to adjust their teaching in the moment. This represents augmentation, as the technology enhances the original activity by adding functional improvements.
Next, envision the instructor leveraging both the polling software and the instructional strategy known as peer instruction to help students co-construct knowledge. Here, students first answer a poll, discuss their reasoning with peers and then vote again. This is modification because it transforms the learning activity into something more collaborative and dynamic, altering the original activity in a meaningful way.
Finally, picture a student who, despite participating in peer instruction, still struggles with the material. They turn to an LLM-powered bot, which creates custom questions, quizzes them, provides corrective feedback and repeats the cycle until mastery is achieved. This happens without the involvement of the instructor, other students, textbooks or premade quiz banks. This is an example of redefinition, where AI makes possible a personalized, responsive learning experience that fundamentally alters the traditional learning process.
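The tutoring cycle just described—generate a question, grade the answer, give corrective feedback, repeat until mastery—can be sketched as a simple loop. In this illustrative sketch, `llm_quiz` is a hypothetical stand-in for a real LLM call: it simulates a student whose accuracy improves with each feedback round, and "mastery" is crudely operationalized as a streak of consecutive correct answers.

```python
import random

def llm_quiz(topic: str) -> bool:
    """Hypothetical stand-in for an LLM that poses a question, grades the
    answer and gives corrective feedback. Returns True if the simulated
    student answers correctly; their odds improve with every cycle."""
    llm_quiz.skill = min(1.0, llm_quiz.skill + 0.1)
    return random.random() < llm_quiz.skill

llm_quiz.skill = 0.2  # the simulated student's starting accuracy

def mastery_loop(topic: str, streak_needed: int = 3, max_rounds: int = 50) -> bool:
    """Quiz, give feedback and re-quiz until the student strings together
    enough correct answers in a row (a crude mastery criterion)."""
    streak = 0
    for _ in range(max_rounds):
        if llm_quiz(topic):
            streak += 1
            if streak >= streak_needed:
                return True  # mastery demonstrated; student may progress
        else:
            streak = 0  # corrective feedback, then start the streak over

    return False  # mastery not reached within the round budget

random.seed(0)
print(mastery_loop("Newton's third law"))  # prints True
```

The structure mirrors Bloom's mastery teaching: the loop refuses to terminate until mastery is demonstrated, rather than advancing the student on a fixed schedule.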
Interestingly, most educational technologies tend to disrupt only one or two specific aspects of teaching or learning, and they rarely lead to full redefinition of pedagogical practices. They often serve as tools for substitution or adaptation, adding efficiency or convenience but not fundamentally altering the learning experience. Even in the examples provided, two pedagogical innovations—polling software and peer instruction—were needed to reach the modification level. Generative AI, however, stands apart; it simultaneously disrupts multiple aspects of teaching and learning, often pushing toward redefinition. It has the potential to create entirely new ways of engaging with content, personalizing learning and supporting both students and instructors in ways that traditional methods and tools cannot.
Focus of Innovation
The focus of innovation describes the key pedagogical areas where the innovation has an impact. For instance, does it introduce novel ways to:
- Assess or evaluate student learning: Can it provide more personalized, formative feedback or enable new forms of assessment that were previously impractical?
- Deliver course content or expand access: Does it facilitate more flexible and interactive ways for students to engage with material?
- Help instructors develop pedagogical beliefs, knowledge or skills: Can it support professional development, offering instructors new strategies to foster student-centered, learning-focused environments?
- Support teaching and learning activities: Does it enhance in-class or out-of-class activities, streamlining tasks for both instructors and students?
- Design courses or curricula: Can it assist in creating more dynamic, adaptable course designs tailored to diverse student needs?
Most pedagogical innovations tend to focus on one or two of these areas. However, generative AI has the potential to simultaneously influence many—perhaps all—aspects of teaching and learning. It can, for example, introduce new methods for assessing student learning, such as adaptive quizzes and real-time feedback; broaden access to personalized learning materials; support instructors through course, assignment and activity design; and streamline both teaching tasks and student study practices. In previous sections, I have already hinted at how some of these possibilities might unfold, but the true extent of generative AI’s influence on each of these areas remains unknown and open to exploration.
Barriers, Risks and Costs
Like other pedagogical innovations, each potential benefit of generative AI—whether it’s improving student learning, increasing instructor efficiency or achieving other desirable outcomes—comes with a corresponding challenge. For example, while generative AI offers unprecedented opportunities for personalized learning, it also tempts students to outsource cognitive tasks, potentially leading to a shallow understanding of complex concepts. Similarly, while AI can provide competent, on-demand instructional coaching, it may encourage educators to offload crucial instructional design decisions, potentially resulting in uninspired and uninspiring teaching.
In short, despite its transformative potential, the use of generative AI in teaching and learning comes with unique barriers, risks and costs. These challenges must be carefully addressed to ensure that AI enhances, rather than undermines, education.
Barriers, as my co-author and I define the term in our recent paper, are the “known knowledge, physical, structural or cultural obstacles that complicate or impede the innovation and negatively impact the intended outcomes.” A few commonly recognized barriers to adopting generative AI include:
- Lack of knowledge: Many instructors and students lack formal opportunities to learn how to effectively and efficiently leverage generative AI tools for teaching and learning.
- Limited access: Not all instructors or students have access to the most advanced generative AI tools, limiting their ability to fully benefit from these innovations.
- Bias and misinformation: Generative AI tools frequently produce misleading, factually incorrect and biased information, which can undermine educational goals.
Risks, on the other hand, we define as the “potential knowledge, physical, structural or cultural obstacles that may complicate or impede the innovation and negatively impact the intended outcomes.” These risks can be mitigated or eliminated with thoughtful adoption strategies. Examples of risks associated with generative AI adoption include:
- Educational inequity: If some students have access to advanced AI tools and training while others do not, existing opportunity gaps will widen, exacerbating educational inequality.
- Academic fraud: Without clear guidelines on the appropriate use of generative AI, and if institutions fail to adapt to how it is transforming learning and assessment, instances of academic fraud—whether intentional or unintentional—are likely to increase.
- Overdependence: Overreliance on generative AI tools for tasks such as content creation, feedback or tutoring may reduce students’ development of critical thinking and problem-solving skills. If students begin to depend on AI for cognitive tasks, they may struggle to build the foundational knowledge and independent learning strategies necessary for academic and professional success.
Finally, costs refer to the “one-time and ongoing direct and indirect costs associated with the development, implementation or sustainability of an innovation.” While it is tempting to focus solely on monetary expenses—and these are certainly significant for generative AI tools—other costs may have a more profound impact. Here are a few key examples:
- Intellectual property: Large language models and other generative AI tools are trained on data sets that include intellectual and creative works under copyright. This raises ethical and legal questions about data usage, ownership and compensation for creators.
- Environment: The training of large language models consumes significant amounts of nonrenewable energy. It also requires vast quantities of water to cool data centers and large amounts of rare metals to produce graphics processing units, contributing to environmental degradation and resource depletion.
- Ill-prepared workforce: Failing to adequately prepare students to use generative AI effectively in their future careers comes with an opportunity cost. Without the skills to harness AI in meaningful ways, students may find themselves at a disadvantage in an increasingly AI-driven job market.
GPT = General Purpose Technology?
To close my analysis of generative AI as a pedagogical innovation, I want to highlight its general purpose nature. Whether by coincidence or design, “GPT” not only stands for “generative pretrained transformer” but also evokes a type of innovation known as a “general purpose technology”—one that has broad economic and societal impact. General purpose technologies are defined by three characteristics: widespread proliferation, continuous improvement and the ability to spur further innovations. Common examples include electricity, computers and the internet. While it usually takes decades to identify such technologies, some researchers already argue that LLMs fall into this category.
There are signs suggesting that generative AI is also a general purpose pedagogical innovation. This implies that: 1) its use in teaching and learning will become ubiquitous, 2) it will continuously improve as a tutor and instructional coach, and 3) it will lead to new, as yet unforeseen, pedagogical innovations.
As generative AI continues to evolve, it will reshape the future of education in ways we are only beginning to grasp. However, for it to truly revolutionize teaching and learning, we must be intentional in how we integrate it—ensuring that it not only enhances current practices but also expands the horizons of what is possible in education.