If you’ve been reading my posts here, you know that I am skeptical of AI as a tool for teaching and learning. While I am not an abolitionist, my focus from the beginning has been on designing experiences and assessment strategies that make AI irrelevant to the work we ask students to do. I see it as a tool oriented around efficiency and productivity, not learning.
But I also know there are other people who share many, if not most, of the same values around teaching and learning who are experimenting with generative AI tools with the goal of helping improve their students’ engagement with the work of higher education. James Bedford is one of those folks, and when he told me about some of his experiments with AI in the classroom, I thought it would be useful for others to hear his perspective, the conditions from which it comes, and how it differs from my own. —John Warner
Guest Post: AI Meets Academia—Navigating the New Terrain
By James Bedford
The idea of universities ‘embracing’ AI technology has been met with justified concerns about students’ critical-thinking skills and whether they’ll develop an over-reliance on these tools. When ChatGPT went down briefly this month, there was a flurry of concerned posts on X (formerly Twitter), which could be a sign of things to come. But one thing is certain: There is a growing divide among educators over whether AI should have any place at all in the lives of university students.
It’s important we have these discussions as we still don’t know what the long-term impacts of AI will be—in particular, what it could mean for the environment, or individuals’ mental health and well-being. Despite the potential risks, I find myself taking a more neutral standpoint (something I tell my students never to do). When it comes to the presence of AI in education, I have seen both the negative implications for student learning, and also, how AI ‘tools’ are helping demystify and increase accessibility for students trying to engage in the scholarly practices required to obtain a university degree.
Those who have been teaching academic skills for any period of time have noticed a disparity between what is expected of students and what students actually know. This gap is real, and it can leave students floundering, unable to see or understand what is required of them in their work and assessments. The lack of transparency around expectations isn’t just a minor hurdle, either: it fundamentally undermines student confidence and hampers their ability to engage meaningfully with academic tasks. Some students do seek help, but not nearly as many as should. The rest are set up to make a raft of mistakes (which is fine, but usually reflected in their grades), and a major part of the university experience becomes a matter of adjusting to these largely unspoken expectations around academic writing and research behavior.
Let me give you an example. As an academic learning facilitator, one of the supports I offer is the one-to-one consultation. Students come from various faculties, skill levels and stages of the university lifecycle with questions about how to write an essay, or perhaps, how to improve their writing. The thing about these consults is that I’m not there to mark their work, I’m there to listen and help ‘facilitate’ the students’ own learning. As a result, students seem to be more open and honest about what they are struggling with. I see and hear about all kinds of things—from poor or harsh feedback from their instructors to how students are actually using AI tools, for better or worse.
What I have learned from my work with students over the last 10 years is that many educators seem to forget the following:
- academic skills aren’t explicitly taught; they are treated as assumed knowledge.
- because of this, there is a stigma around asking for help with these skills.
Students often feel embarrassed asking for help with academic skills, out of a sense of inadequacy about not knowing. It is expected, for example, that students arrive at university already possessing the skills to write university-standard essays or conduct literature reviews, when in fact it might be the first time they’ve ever attempted such tasks. Most have never accessed a journal database or been taught how to write a proper thesis statement. And the methods they were taught previously might not align with the expectations of their institution.
The written feedback ‘we’ provide can also get in the way, undermining a student’s confidence and their learning. A few of my favorite and far too common examples of poor feedback from educators: the classic three question marks ‘???’ or ‘Awk’ dropped next to a sentence the student has struggled to articulate, or the ubiquitous ‘Ref’ next to a minor citation error (a full stop out of place, or the wrong use of italics). I spoke recently with a student who had lost all confidence in their academic writing after the first year of their master’s degree. One of their professors had commented that the writing appeared to be at a high school level. Whether or not the comment was accurate (it wasn’t) is beside the point.
What I’m getting at is that, when it comes to course work, educators can often forget what it’s like to know next to nothing about their subject. We expect students to become mini-experts on a topic in a matter of weeks, while juggling multiple other courses with multiple assessments, with almost no coordination of workload between those courses or of when assessments are due. We then scrutinize their performance, tested, graded and measured through assessments that are largely broken, in a system barely coping with record-high student-to-teacher ratios and the extensive casualization of the workforce. And then we have the nerve to tell students what tools they should and shouldn’t use to support their studies.
Of course, the alternative isn’t good either. We don’t want students mindlessly adopting these tools in ways that are counterproductive to their learning. There is already a flood of influencers pushing AI products onto students through ads on Instagram and TikTok. If universities are going to continue assuring the public that their graduates have met the required learning outcomes of their programs, we need to consider how students might be using these tools to bypass learning altogether. This means we absolutely must have people who are thinking critically about widespread adoption and who are subjecting ed-tech companies to intense scrutiny.
However, understanding the capabilities, and the limitations, of generative AI applications has become an important skill students need to develop. Doing so empowers them to make their own informed decisions about appropriate use cases and builds the critical-thinking skills essential for any responsible adoption of these tools. Students need agency to decide for themselves when engaging with AI tools might enhance their learning, and crucially, when it won’t.
One of the great advantages of exploring these products with students is that it often leads to them wanting to develop their own skills further, particularly when they see the limitations and drawbacks in the outputs these tools provide. When I demonstrate one of the many AI research tools like Elicit or Consensus in workshops, the reaction from students is mostly positive. There are sighs of relief as I show how much care has gone into the user experience of some of these applications, which makes them far more accessible and easier to navigate than your standard scholarly databases. In addition, the ability to search articles with vector search technology allows you to locate relevant papers without relying solely on keywords. Many students in their first year of university find standard databases overly complicated and unnecessarily confusing. AI research tools help them, at least, begin to navigate a landscape that has always been notoriously difficult for them to engage with.
Based on multiple surveys I’ve run in workshops over the years, it appears that around 40 to 50 percent of students are using AI to support their studies; for students with English as an Additional Language (also called English as a Second Language), the figure seems closer to 70 to 80 percent. These numbers shouldn’t be taken as representative of any student population, but they are fairly consistent with other studies I’ve seen. While some students will choose, for various reasons, to avoid these tools altogether, others find that AI tools can support their learning in productive and helpful ways.
Of course, for all the fantastic things these tools can do, nothing will ever replace human reasoning and intuition (hopefully). Research is about finding a unique narrative thread hidden among all the papers you’ve been taking in, which is very difficult to do without genuine understanding and human experience. But before dismissing AI research tools, it’s worth considering the expectations already placed on students and understanding why they might find these tools useful. The requirements for producing decent academic writing and research seem absurd given that many of these tasks are assigned to undergrads with little to no scholarly research experience. Students tend to ‘perform’ these tasks in a vacuum, and for the most part, many of them will never have to write essays again.
If we are going to assign research-related tasks to students, then we should expect some of them to engage with these tools at some point. Engaging with these tools doesn’t have to reduce their capacity to engage with academic skills either. In many ways, using these tools well has become a new academic skill.
When considering whether to allow students to use generative AI (GenAI) tools for research, I always come back to the moments I’ve spent showing how these tools work and what they can and can’t do. Some students seem excited to do something they previously couldn’t have cared less about. Others will look at them and think: that’s not for me. Overall, this is not about making the process easier, saving time, or “reducing friction.” AI is simply another, albeit flawed, source of inspiration some students might draw on in order to participate in what can often be a cagey, performative and, at times, gated community.
If educators are already turning to AI tools to complete mindless tasks, why wouldn’t students do the same? Not all students are going to use these tools; in fact, many are actively deciding not to. But that shouldn’t mean we dismiss the potential benefits for those students who might find GenAI tools useful. Instead, we should be educating them on responsible use, the limitations, and the importance of critical thinking in an age when AI-generated content is only going to become more ubiquitous. As educators grapple with the impact of GenAI on student learning, the best path forward is not one of outright bans or blind acceptance, but of equipping students with the skills to navigate this new terrain.