From student use in assignments (both formally cited and unattributed) to faculty use (grading, syllabus creation and research) and administrative use (budget balancing, staffing assignments and marketing guidance), generative AI made headlines and seeded disruption in the enterprise of higher learning last year. Such rapid change in higher education, a field that traditionally evolves at a glacial pace, brought anxiety to many in our field. With the advent of generative AI also came fear of the existential impact of artificial general intelligence (AGI).
In the delivery of learning, we can expect significant changes that will make our classes more personalized, more closely monitored and more likely to meet outcomes. Using generative AI, faculty members will be able to provide students with tutors programmed to assist them in achieving designated learning outcomes. Built on the effective model of Khanmigo, Khan Academy's GPT-powered guide that serves as a tutor for learners and an assistant for instructors (developed with funding from OpenAI), such tools can provide personalized help to students. They can assess how students solve problem sets, identify and correct misunderstandings, and deliver mini-tutorials that remedy deficits in learning and reinforce successful learning. In a TED talk, Salman Khan said this approach can improve student learning by two standard deviations:
“Benjamin Bloom’s 1984 ‘Two Sigma’ study highlighted the benefits of one-to-one tutoring, which resulted in a two-standard-deviation improvement in students’ performance. Bloom referred to this finding as the ‘Two Sigma Problem,’ since providing one-to-one tutoring to all students has long been unattainable due to cost and scalability issues.”
Khan shared how an AI-powered assistant could scale this tutoring economically and provide personalized instruction to students on a global level, and during his talk he gave a live demo of Khanmigo.
I anticipate that these, and similar tools provided by other generative AI firms in 2024, will become widely available to support courses across the curriculum. Their capabilities include not only the obvious data-driven features but also freeform engagements with students. For example, Natasha Singer writes in The New York Times,
“Students can use it to take math quizzes, practice vocabulary words or prepare for Advanced Placement tests in subjects like statistics and art history. The tutoring bot also offers more playful, freeform features. Students can chat with a simulated fictional character like Lady Macbeth or Winnie-the-Pooh. They can collaborate on writing a story with Khanmigo. Or debate the tutorbot on topics like: Should students be allowed to use calculators in math class?”
I am particularly taken with the opportunity for students to co-author group projects with the chatbot (clearly citing who researched and wrote what in the project). These exercises will emulate the real-world work environment, where humans already engage bots in blended human/computer writing of reports, planning of initiatives and assessment of the outcomes of previous approaches. Bots can also serve as "synthetic students," posting to discussion boards and emulating the responses of students from a wide variety of experiences, locations and perspectives. Using this technique, faculty can ensure that discussions include diverse perspectives that challenge individual student views, and it enables virtual-student-to-real-student exchanges in self-paced classes where no other students are at the same point in the course outline at any one time.
In essence, generative AI can support mastery learning, in which students progress to the end of a class only after achieving mastery in every one of its modules. Pedagogically, this is important because it closes a gap in current practice, in which a student can earn above-average assessment scores in most modules of a class yet fail one or two of the 10 to 15 modules. That failure creates a flaw in the scaffolding of learning and may seriously impair later learning that assumes an acceptable level of knowledge from prior classes.
Generative AI will encourage improved learning outcomes, accompanied by extensive data and course-enhancement recommendations that help instructors improve their lectures, assignments and assessments, all delivered by the apps in the blink of an eye. This year the apps will become fully multimodal: they will accept text, speech, video and images as input and produce output in those same media, along with webpages, analyses, spreadsheets, programs and more. Just as Captain Picard of the starship Enterprise famously commanded, "Make it so," digital designers and instructors around the world will speak, show or type their prompt to the app, and generative AI will make it so in a matter of seconds.
Students and mentors will be able to construct personalized learning opportunities and priorities merely by articulating topics and outcomes in a prompt. Generative AI will lift heutagogical, self-determined approaches to learning into the mainstream. Given topics to be examined, generative AI will construct well-rounded research and learning agendas, including learning outcomes and assessments tailored to each learner's needs.
Much has been written about the next generation of AI: artificial general intelligence. I anticipate that a level of AGI will be achieved in the next three to five years. However, I do not believe we will see an existential threat to humanity. I do not anticipate a single ubiquitous AGI system; rather, there will be dozens that interact and compete rather than conspire to eliminate humans. OpenAI has taken the potential threat seriously, investing millions of dollars in an initiative to ensure human interests remain paramount in all AGI activities. Further, the company has solicited proposals and funded grants at a variety of institutions committed to superalignment research and strategies:
“While many experts believe these fears are overblown, OpenAI is already taking several steps to address safety concerns. The company recently announced it would be investing $10 million into super alignment research, in the form of $2 million grants to university labs, and $150,000 grants to individual graduate students. Open AI also revealed it would be dedicating a fifth of its computing power to the super alignment project, as it continues to preemptively research how to govern AGI.”
The year ahead promises to be even more exciting than the one we just left behind. Make sure that you and your colleagues keep up to date with new developments and trends so that you may best serve your students and your institution.