The integration of artificial intelligence (AI) into nearly every aspect of daily life has sparked numerous debates about the efficacy of AI tools and whether using them is cheating. Sports Illustrated was recently challenged for publishing AI-generated content under a made-up byline. The Writers Guild and Screen Actors Guild strikes both focused, in part, on the reasonable use of generative AI in the creative process.
The field of education is not immune from these controversies. A lot has been written about the impact of AI on student work, but relatively little has focused on the role it should play for college instructors. As educators increasingly turn to AI for developing course materials, a pertinent question arises: Should instructors disclose their use of AI in this process?
As part of my end-of-course survey in a graduate class on using educational technology to transform instruction, I asked students that question. Only 25 percent said instructors should disclose their use of generative AI in developing instructional materials. It wasn’t clear whether that disclosure should be as simple as a syllabus footnote noting that some materials were developed with AI or a more detailed enumeration of which materials were generated with which tools. The students provided some anonymous insights.
One student noted, “If you worked hard to come up with the prompt and edited the content, I think you’re good!” This perspective underscores the importance of effort and personal input in the educational process and treats generative AI tools much like any other resource. Using them does not diminish the instructor’s contribution to the course material; it’s the intellectual labor and pedagogical expertise of the educator that shape and refine the AI-generated content. The emphasis here is on the outcome, well-crafted and thoughtfully curated content, rather than on the tools used to achieve it.
Some students showed relative indifference toward instructors’ use of AI. Phrases like “It doesn’t matter to me” and “I do not see the need for this” suggested a pragmatic approach: students seem more concerned with the quality and relevance of course materials than with how they were created. This attitude mirrors a growing trend in education toward outcomes and competencies. From that perspective, whether an instructor uses AI, consults colleagues or employs traditional research methods is inconsequential if the educational objectives are met. As one student mentioned, they often ask colleagues for ideas, yet they don’t disclose that they gathered peer feedback on materials. Why should using a generative AI tool for feedback require a different approach?
This leads to an intriguing point raised by another student: “Soon, nearly all of us will use AI to develop at least some of our course materials ...!” The statement acknowledges the seemingly inevitable integration of AI into educational practice and suggests a future in which AI’s role is so normalized that disclosure becomes less relevant. As educators and students alike adapt, using AI may become as commonplace and unremarkable as using the internet for research.
Different Standards?
The conversation takes a more complex turn when considering the differing standards for students and instructors regarding AI use. One student reflected, “No, I don’t think it’s necessary [to disclose]. I do think it’s interesting how instructors are allowed to use AI but students are advised not to.”
One local university’s academic integrity policy holds that any use of AI to complete an assignment is a violation. Since nearly all of my students use Grammarly or other tools with AI functions, such as Word’s spell-checker, that black-and-white approach to generative AI is problematic from a policy perspective. It creates a potential double standard in educational settings and raises questions about fairness and the ethical use of AI: instructors may use AI to enhance the learning experience, while students are often cautioned against relying on it for their academic work. This disparity highlights the need for a nuanced discussion about the role and rules of AI in different parts of the educational process. It may also signal the need for a clear definition of AI in policy. One might assume that a policy banning AI is really aimed at generative AI tools, but the policy must be exceptionally clear so that students and faculty are not confused.
My personal practice is to be open about how I use generative AI. I discuss with my students how I am using AI to help develop materials, and when I do work outside my classroom, I identify which materials were AI generated and with what tool; that transparency is part of learning to integrate generative AI into teaching. One key is to remind students that anything AI generated still needs human review before it is used.
In conclusion, the question of whether instructors should disclose their use of AI in developing course materials has no straightforward answer. It touches on broader themes of effort, innovation, ethical standards and the future of education. While some advocate for transparency, others see AI as a mere extension of existing pedagogical tools, requiring no special mention.
As we navigate this new landscape, at least one small group of students sees no need for instructors to disclose whether they used generative AI. But disclosing which tools were used, and how, can itself be instructional: sharing effective, practical and ethical uses of these emerging technologies can add one more layer to an effective course.