Rafael Moron and Lexy Modrono were accustomed to professors at Florida International University either glossing over policies on the newly emerging uses of generative artificial intelligence or avoiding discussions of AI entirely.
“In very few classes it was discussed,” Moron, who graduated from FIU in May, said. “Most of the time, the policy would be a prohibition of AI, and if it was used, it would be classified as plagiarism—plain and simple.”
FIU has a general AI policy that closely mirrors its plagiarism policy. The vast majority of universities have not implemented any AI policies, according to a survey of provosts conducted by Inside Higher Ed earlier this year.
So Moron and Modrono were surprised when they and a dozen other FIU students were asked to come up with their own AI guidelines for a Rhetorical Theory and Practice class earlier this year.
“I definitely was a little surprised, because ever since AI became more accessible, I feel professors are really strict with it,” Modrono said. “So knowing we had a voice in what the policy was going to be was surprising.”
Christine Martorana, an associate professor of writing at FIU, has spent two semesters allowing students in multiple courses to create their own policies governing AI use.
“Trying to police AI use is counterproductive,” she said. “As a professor, that’s not the stance I want to take, and that’s not the relationship I want to have with my students. I was trying to create a policy and there were so many ways it could go; it became, ‘Let’s share this with the students and see what they come up with.’”
Students in the spring semester were broken into small groups to come up with what they believed were best practices, which they then presented to the class at large to fine-tune their ideas. In a summer course, with its shorter time frame, Martorana had students look at the spring semester policy and make tweaks to create their own.
“I feel like for me personally, I definitely felt more valued as a student,” Modrono said. “I felt she acknowledged that we’re responsible students and that we know what we’re doing.”
Common agreements emerged—namely, that students cannot use AI to plagiarize—but differences also cropped up. Students in the spring semester course, for example, determined that it was OK to use AI to brainstorm, while students in the summer section decided brainstorming with AI would be allowed only when a student was working alone, not in a classroom setting with peers. The spring semester students said generative AI could be used to help organize a paper; the summer course decided the technology should not be used to help with outlining.
The policies across both semesters covered how to use AI in courses and how to cite AI usage in papers and other course materials.
Martorana, recognizing that AI will be “an inevitable part” of writing and communicating in the future, said she found the policymaking a useful way to prepare students for that future.
“I wanted to get them to have a buy-in,” she said. “I wanted them to, one, understand [AI], and two, follow it, because it’s something they helped create.”
Brianna Dusseault, principal and managing director at the Center on Reinventing Public Education, said while she has not heard of other professors asking students to weigh in on AI policies specifically, it’s a tactic that professors—even elementary school teachers—have deployed with their classes, asking them to come up with general policies.
“You set norms and create assumptions together over the year,” she said. “This is a new domain in AI, but this kind of assignment where you bring students in to co-create their learning environment, it would make sense.”
Dusseault, whose center is currently conducting studies on how faculty use AI, pointed to its research (and others’) showing that professors have a generally lower AI adoption rate than students.
“This is an example of a professor playing a role that universities writ large may not be ready to do,” she said. “We’re still trying to get the adults to understand it, much less the students.”
Both Dusseault and Martorana said that having students help create AI policy could boost AI literacy, noting the amount of research students had to conduct on the ethical—and unethical—uses of the technology. Martorana added that the conversation surrounding AI ethics carried into discussions throughout the entire semester, with students asking whether their use of AI fit within the policies they had created.
“I’ve been teaching since 2008 and I’ve never had students ask about ethics and academic integrity,” she said. “For me, that suggested students continued to think about it throughout the semester and that the line of conversation was more open.”
Martorana will continue to let students create their own AI policies this fall, expanding the practice from her upper-level courses to first-year students as well.
“If you’re trying to police AI use, you’re ultimately fighting a losing battle,” she said. “As AI tech continues to advance, our policies need to take a more productive and positive approach, versus ‘Here’s what you should not do,’ and instead show, ‘Here are the ways we can use it in our course.’”