
Higher education already employs artificial intelligence in a number of effective ways: course and facilities scheduling, student recruitment campaigns, endowment investment and support, and many other operational activities at large institutions are guided by AI. The algorithms that power these applications can use big data to project or predict outcomes through machine learning, in which the computer “learns” to adapt to myriad changing elements, conditions and trends.

Adaptive learning is one of the earliest applications of AI to the actual teaching and learning process. Here, AI orchestrates the interaction between the learner and the instructional material, guiding the learner toward desired outcomes as efficiently as possible based on that learner's unique needs and preferences. Using a series of assessments, the algorithm presents a customized selection of instructional materials adapted to what the learner has already mastered and what the learner has yet to learn. This method eliminates needless repetition of material already learned while advancing through the content at the learner's own pace, ensuring that learning outcomes are achieved.
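
For readers curious what such a loop looks like in practice, here is a deliberately simplified sketch. The skill names, the 0.8 mastery threshold and the assessment function are all invented for illustration; no actual adaptive learning product works exactly this way.

```python
# Toy adaptive-learning loop: assess, skip mastered material, repeat.
# Skill names and the 0.8 mastery threshold are illustrative only.

MASTERY = 0.8

def adaptive_session(skills, assess):
    """Present only unmastered skills until every score clears MASTERY.

    skills: dict mapping skill name -> current score in [0, 1]
    assess: callable(skill) -> new score after a round of instruction
    """
    while True:
        # Material already mastered is skipped, never repeated.
        remaining = [s for s, score in skills.items() if score < MASTERY]
        if not remaining:
            return skills  # all learning outcomes achieved
        for skill in remaining:
            skills[skill] = assess(skill)

# Example: a learner who gains 0.3 per round on each remaining skill.
scores = {"fractions": 0.9, "decimals": 0.4, "percent": 0.1}
result = adaptive_session(scores, lambda s: min(1.0, scores[s] + 0.3))
```

In this run, "fractions" is never presented again because it starts above the threshold, while the other two skills are drilled until they clear it — the repetition-skipping behavior described above.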

There is great room for further growth of AI in higher ed, as Susan Fourtané writes in Fierce Education:

The potential and impact of AI on teaching have prompted some colleges and universities to take a closer look at it, accelerating its adoption across campuses. For perspective, the global AI market is projected to reach almost $170 billion by 2025. By 2028, the AI market is expected to gain momentum, reaching over $360 billion and registering a growth rate of 33.6 percent between 2021 and 2028, according to a report by the research firm Fortune Business Insights. The market is mostly segmented into Machine Learning, Natural Language Processing (NLP), image processing, and speech recognition.

One of the pioneers in applying AI to support learning at the university level, Ashok Goel of Georgia Tech, famously developed “Jill Watson,” an AI program that serves as a virtual graduate assistant. Since “Jill’s” first semester in 2016, Goel has repeatedly and incrementally improved the program, expanding the potential to create additional AI assistants. The program is becoming increasingly affordable and replicable:

The first iteration of Jill Watson took between 1,000 and 1,500 person hours to complete. While that’s understandable for a groundbreaking research project, it’s not a feasible time investment for a middle school teacher. So Goel and his team set about reducing the time it took to create a customized version of Jill Watson. “Now we can build a Jill Watson in less than ten hours,” Goel says. That reduction in build time is thanks to Agent Smith, a new creation by Goel and his team. All the Agent Smith system needs to create a personalized Jill Watson is a course syllabus and a one-on-one Q&A session with the person teaching it … “In a sense, it’s using AI to create AI,” Goel says, “which is what you want in the long term, because if humans keep on creating AI, it’s going to take a long time.”

Increasingly, students are accustomed to interacting with AI-driven chatbots. Serving in a wide range of capacities at colleges, chatbots commonly converse in text or computer-generated speech using natural language processing. These algorithms may even build a virtual relationship with students. Such was the case with a chatbot named “Oli” tested by Common App. For 12 months, this chatbot texted half a million students in the high school Class of 2021 twice a week to guide them through the college application process. In addition to the pro forma steps of the application, Oli offered friendly reminders for students to look after themselves during the COVID pandemic, suggesting they keep in touch with friends, listen to favorite music or take deep breaths. When the process was complete, Oli texted:

“Hey pal,” Oli said one week before officially signing off, “I wanted to let you know that I have to say goodbye soon. Remember, even without me, you’re never alone. Don’t hesitate to reach out to your advisor or close ones if you need help or someone to talk to. College isn’t easy, but it’s exciting and you’re so ready!” The relationship might have ended there. But some of Oli’s human correspondents had more to say. Hundreds of them texted back, effusive in their praise for the support the chatbot had offered as they pursued college. Research about social robots shows that children view them as “sort of alive” and make “an attempt to build a mutual relationship,” writes MIT professor Sherry Turkle. It’s a type of connection, a “degree of friendship,” that excites some researchers and worries others.

Just last month, Google announced a new AI tutor platform to give students personalized feedback, assignments and guidance. Brandon Paykamian writes in GovTech:

[Google Head of Education] Steven Butschi described the product as an expansion of Student Success Services, Google’s software suite released last year that includes virtual assistants, analytics, enrollment algorithms and other applications for higher ed. He said the new AI tutor platform collects “competency skills graphs” made by educators, then uses AI to generate learning activities, such as short-answer or multiple-choice questions, which students can access on an app. The platform also includes applications that can chat with students, provide coaching for reading comprehension and writing, and advise them on academic course plans based on their prior knowledge, career goals and interests.
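
Google has not published this platform's internals, but the idea of a competency skills graph is straightforward to illustrate. In the sketch below, edges record which skills are prerequisites for others, and a student is quizzed only on skills whose prerequisites are already mastered; the skills, edges, readiness rule and question template are all invented for the example.

```python
# Illustrative competency skills graph: each skill maps to its
# prerequisites. Skills and the readiness rule are invented here.

prereqs = {
    "algebra": [],
    "functions": ["algebra"],
    "derivatives": ["functions"],
    "integrals": ["derivatives"],
}

def ready_skills(prereqs, mastered):
    """Skills not yet mastered whose prerequisites are all mastered."""
    return sorted(
        skill
        for skill, needs in prereqs.items()
        if skill not in mastered and all(p in mastered for p in needs)
    )

def next_activity(skill):
    """Stand-in for AI generation of a short-answer question."""
    return f"Short answer: explain one key idea from '{skill}'."

# A student who has mastered algebra is ready to work on functions.
mastered = {"algebra"}
for skill in ready_skills(prereqs, mastered):
    print(next_activity(skill))
```

A real system would generate richer activities and update the graph from student responses, but the core advising logic — walk the graph from what a student already knows toward their goals — follows this shape.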

With all of these AI applications in development and early release phases, questions have arisen as to how we can best ensure that biases are avoided in the AI algorithms used in education. At the same time, concerns have been raised about ensuring that learners recognize they are interacting with computer programs rather than live instructors, that learner privacy is maintained, and about related issues in the use of AI. The federal Office of Science and Technology Policy is gathering information with the intention of creating an AI Bill of Rights. Generally, the AI Bill of Rights is meant to “clarify the rights and freedoms” of persons using, or who are subject to, data-driven biometric technologies.

How is your institution preparing to integrate reliable, cost-effective and efficient AI tools for instruction, assessment, advising and deeper engagement with learners? Are stakeholders, including faculty, staff, students and the broader community, included in the process to gather the broadest input and to ensure that the intended advantages and outcomes of AI are realized?
