

For those not already working in the field of artificial intelligence, the rise to prominence of generative AI platforms—primarily ChatGPT—seemed to happen overnight in November 2022. Since that time, institutions of higher ed have been responding in (understandably) reactive ways. In many colleges and universities, newly assembled institutionwide committees developed guidelines to help individual faculty members establish course-level policies.

Simultaneously, many centers for teaching and learning swiftly deployed faculty development programming to support instructors trying to familiarize themselves with these new platforms while ameliorating concerns about academic integrity. Programs included listening sessions to capture faculty concerns, platform-specific overviews (on ChatGPT, DALL-E 2 and AlphaCode, to name a few examples), and assignment-design workshops. 

Though necessary and appropriately reflective of the triage-like prioritization of institutions’ immediate concerns, these initial responses were reactive and circumscribed, focusing primarily on assessment methods and academic policy. Now that faculty members are becoming more familiar with generative AI platforms, experimenting with integrating them into their teaching and coming to understand the highly discipline-specific implications, it is an optimal time for colleges and universities to shift to a more proactive, scaled and systematic response.

Specifically, I suggest now is the time to move toward a program-level response, one involving the collaborative articulation of program-specific learning outcomes that define what students should know about if, when and how generative AI should be used in field-specific academic and professional contexts.

The Rationale for a Program-Level Curricular Response

As institutions of higher education, we have reached the point where we need to engage faculty in discussion of these critical questions:

  1. What do we want the students in our academic program to know and be able to do with (or without) generative AI?
  2. At what point in our academic program—that is, in what specific courses—will students learn these skills?
  3. Does our academic program need a discipline-specific, program-level learning outcome about generative AI?

Codifying the answers to these questions in academic programs’ respective curricula is essential as the contexts in which our educational institutions operate continue to evolve. Academic programs—and those who design, deliver and support them—will suffer if we do not adapt to the shifting academic, technological and professional landscapes that we shape and by which we are shaped in turn.

There are numerous benefits to engaging academic departments in a consideration of how generative AI can or should inform an updating of their academic programs’ curricula. These include:

  • Transparent expectations for students. Most importantly, establishing clear, program-level expectations for what students should know and what they should be able to ethically accomplish with (or without) generative AI is an essential step in promoting student learning and success. The costs of not having such expectations—or allowing them to remain ill defined or variable across sections of a course—are myriad, potentially impacting students’ motivation, anxiety, performance and persistence.
  • Stronger program identity and efficiency. Not having clear, program-level expectations—and relegating all curricular decisions to the course level—leaves academic programs more vulnerable to significant gaps or points of overlap in their curricula. By having clear expectations for students regarding the discipline-specific knowledge and applications of generative AI they should develop over their course of study, an academic program can demonstrate its commitment to staying relevant and responsive to the needs of its broader academic/professional field. In so doing, a program can strengthen its clarity/brand, efficiency, impact and marketability.
  • Mitigation of faculty anxiety and feelings of isolation. Thus far, the emerging universitywide guidelines on generative AI seem to focus on course instructors as agents of change, saddling them with the burden of deciding on course-level academic policies and revising course-level curricula. This has the potential to be an incredibly isolating and stressful endeavor for individual faculty members (particularly new faculty or those dealing with departmental politics that may have implications for promotion or tenure).
  • Opportunities for intellectual stimulation, professional growth and gratification. Beyond any considerations of why academic departments might feel obligated (as rightful authors and stewards of curriculum) to update their program-level curricula, there is great potential for the revision process to be, dare I say, fun. The intellectual challenge of thinking about how generative AI can and should shape an academic field—its curriculum, scholarship and applications beyond the classroom—may be incredibly gratifying and invigorating. Furthermore, engaging departmental colleagues in this endeavor may create what, in essence, functions as a learning community of dedicated educators, bolstering an academic department’s sense of identity and belonging.

An Approach to Articulating a Program-Level Curricular Response

Once academic programs commit to having a logical scope and sequence for introducing those skills associated with the responsible and discipline-specific use of generative AI, they will need a clear curriculum-revision process. For example, Cornell University’s Center for Teaching Innovation, where I work, offers a four-step approach to engaging academic programs in curriculum mapping. This approach calls for programs to:

  1. Identify goals and an approximate timeline for their revision effort.
  2. Affirm their existing program-level learning outcomes and determine if a generative AI-specific outcome is needed. For example, a department might decide to revise its ethics-related program-level learning outcome to reflect a commitment to the ethical use of generative AI in academic or professional settings.
  3. Map their curriculum (collaboratively), identifying when and where generative AI-related outcome(s) will be introduced, reinforced and mastered/assessed.
  4. Develop a plan for designing any new courses/course materials that are needed.

Ideally, academic departments will be able to turn to support staff (e.g., those based in centers for teaching and learning, assessment offices, etc.) who can assist with this process, offering professional development and consultative or facilitation support as needed.

An Opportunity to Adapt and Model Responsiveness

This call to shift from a reactive to a proactive response to the ongoing, seismic impact of generative AI on society is as exciting as it is daunting. Seizing the opportunity to model the innovation warranted in the wake of generative AI’s disruptive force requires strategic planning, consensus building, respectful discourse, humility, a commitment to learning and a thoughtful application of discipline-specific expertise. The benefits of formulating a program-level curricular response to the disruption of generative AI justify the coordinated effort and planning such a proactive endeavor requires.

Kathleen Landy is an associate director of the Center for Teaching Innovation at Cornell University.
