In 1979, as a toddler, I was evacuated with my family from our home a short drive from the Three Mile Island nuclear plant after its partial meltdown. My memories of the incident are faint. And the health and environmental impact of the accident on my community was mercifully limited. Still, the specter of that disaster has haunted me since. Whenever I return home to Pennsylvania, the massive cooling towers stand as a stark reminder of what could easily have been a catastrophe. In recent months, however, the structures have taken on a new meaning: They are less a reminder of the past and more a portent of what’s to come. I now see parallels between the looming towers and another technology reshaping our world: artificial intelligence.

If that seems like a stretch, think again: After seemingly being shuttered for good, the Three Mile Island plant is now slated to reopen to power Microsoft’s AI initiatives. Because the deal depends on federal tax breaks, critics argue that taxpayers are being made to fund the resurrection of a site known for the worst commercial nuclear accident in U.S. history, putting the health and safety of the local community at risk. This dynamic—in which communities downwind of industrial progress are left to bear its costs—is as old as industrialization itself. With AI, however, the stakes are global and even more immediate, posing significant, disproportionate risks to marginalized communities around the world.

As higher education grapples with the adoption of this technology, a growing number of colleges and universities are beginning to think deeply about AI equity. But much of this important conversation focuses on ensuring affordability and accessibility. These are critical goals. Yet this narrow lens overlooks systemic and community-level impacts, especially the environmental and social costs felt disproportionately by low-income communities, people of color and Indigenous populations. Education and technology leaders need to broaden the definition of equitable AI.

The environmental impact of AI is already alarming. AI data centers require enormous amounts of electricity and drive up energy costs for nearby residents. Building and maintaining this infrastructure can disrupt ecosystems and displace communities. Data centers are often built on cheap land in areas home to low-income or Indigenous populations. These communities, already vulnerable due to historical exploitation and systemic neglect, are further burdened by environmental degradation. A more equity-oriented approach requires looking beyond the technology’s applications and embracing frameworks and research methodologies that better account for environmental and community impact.

The path forward demands collective action. AI policies will need to incorporate input from urban, rural and marginalized communities, ensuring not only that the technology’s benefits are equitably distributed but also that its harms are minimized. This requires federal and state investment in both environmental research and sustainable technology. Policymakers, technology companies and higher education institutions can work together to create an approach to AI development and implementation that prioritizes equity.

There are also promising developments suggesting that educators are ready to take a more humane and ethical approach to AI design and implementation. For example, Morgan State University, a historically Black university in Maryland, hosts a National Symposium on Equitable AI through its Center for Equitable Artificial Intelligence and Machine Learning Systems, signaling a commitment to ensuring that AI benefits diverse populations. Similarly, Stanford University’s Institute for Human-Centered Artificial Intelligence focuses on advancing AI research that prioritizes ethical considerations, social impact and inclusivity.

The Massachusetts Institute of Technology’s Schwarzman College of Computing, meanwhile, is integrating ethics and equity into its curriculum through the Social and Ethical Responsibilities of Computing initiative, ensuring that future technologists consider the societal consequences of their innovations. And organizations like Black in AI are creating spaces to increase the representation of Black professionals in the industry, ensuring their perspectives and experiences shape the field.

These are all promising examples of how institutions can ground AI development in equity, ethics and inclusion, but individual efforts aren’t enough. Policymakers, too, have both a profound opportunity and a responsibility to create the guardrails needed to ensure AI respects the rights and dignity of underserved communities. Policies like the proposed Algorithmic Accountability Act push companies to assess and address bias in algorithms, while Illinois’s Artificial Intelligence Video Interview Act helps ensure transparency and fairness in AI-driven hiring. New York City’s Local Law 144 requires bias audits of automated hiring tools, and the California Consumer Privacy Act grants individuals more control over their data, addressing concerns about AI misuse.

AI has the potential to shape the future in profound ways, and few sectors stand to benefit as much from the technology as education. But without deliberate action, AI risks exacerbating the very inequities it could help solve. College leaders have a responsibility to be a loud and persistent voice of reason in discussions about artificial intelligence. Let’s ensure that the progress it drives is not only technological but also human.

Meacie Fairfax is strategy director for Complete College America.
