
Higher education practitioners have grappled with the threats and benefits of assistive artificial intelligence technology, like ChatGPT, for the past year, unsure whether to shun or embrace the new tools.

A new report from Cornell University’s Center for Teaching and Learning, “Generative Artificial Intelligence for Education and Pedagogy,” suggests instructors adopt one of three policies: prohibit, allow with attribution, or encourage generative AI (GAI) use.

Each policy takes into account academic integrity, accessibility and privacy, as well as how the different frameworks apply to different disciplines, according to the report.

The group produced three recommendations:

  • Rethink learning outcomes. Many students will soon build careers in GAI-enabled industries, which professors should consider in their teaching and learning outcomes. Leaders should prioritize higher-level objectives, critical thinking and skills for future careers. 
  • Address safety and ethics. Current AI technologies have pitfalls, and professors should share them with students so that students approach GAI critically and validate the information it produces. 
  • Explicitly state policies for GAI. Expectations for the use of GAI in assignments and classes should be clearly and consistently communicated to students, including when GAI is allowed, what uses are considered violations of academic integrity, how the technology’s use should be attributed and how students should validate its output. The group discouraged the use of GAI for student assessment and the use of automatic detection algorithms (like Turnitin) to identify academic integrity violations. 

The Background

Cornell’s administration created a committee in spring 2023 to establish guidelines and recommendations for GAI use at the university. The committee had three goals:

  1. Evaluate the feasibility, benefits and limitations of using AI technologies in educational settings and how they affect learning. 
  2. Assess the ethical implications of using AI technology in the classroom. 
  3. Identify best practices for integrating AI technology into curricula and teaching methodologies, including recommended guidelines for safe and effective use and ways to evaluate and improve AI use. 

For higher education leaders looking to establish policies or for professors unsure of how they should address GAI in the classroom, the Cornell committee described three frameworks:

Prohibiting GAI: AI poses the greatest potential for academic integrity violations when students use the technology to complete assignments meant to develop skills, such as problem sets or research essays.

To avoid the risk of academic integrity violations, professors can move to assessment methods less suited to GAI tech, such as timed oral and written exams or in-class written essays. Professors can also modify assignments to be more specific to class content.

However, a move to in-person assessment and assignments should also take into account policies and practices for accommodating students with disabilities or other marginalized students, such as English language learners.

At present, few tools are capable of regulating AI usage in the classroom; AI detectors rely on statistical analysis and are unreliable. Relying on these tools could result in unfair accusations of academic integrity violations and create distrust between instructors and students, which would damage the learning environment, according to the report.

Professors should also emphasize why completing assignments without assistive AI is necessary to meet learning outcomes, why each learning outcome is important for academic and personal growth, and why integrity violations are harmful to the individual and the larger academic community.

Responsible use of GAI: Generative AI will likely have a place in many students’ lives after graduation, so it is the responsibility of the institution to share ethical and productive uses of GAI when appropriate. The tools can also support students with disabilities, particularly those who experience difficulties with cognitive processing.

Some examples of responsible GAI in the classroom include:

  • Supporting writing skill development through planning, outlining, editing and individualized feedback. Guided use of GAI should be encouraged to decrease the potential for plagiarism and increase creativity and originality in writing. 
  • Technical and mathematical courses may benefit from GAI use, including synthesizing code for data analysis and visualizations from text prompts. GAI can sometimes “hallucinate,” producing inaccurate problems or circular reasoning, so instructors should be aware of the systems’ limitations. 
  • Programming courses can adapt GAI for checking code, generating ancillary tools and creating plain-language explanations of code. Because students may rely too heavily on the tool and fail to learn the skills needed to produce working, understandable and maintainable code, GAI should be used as a tutor, not the sole creator of code. 

