
Last week, in a humanities class at a highly selective university in the Northeast, a student played The New York Times’ Spelling Bee game on a phone, according to another student who sat within view. (The source requested anonymity out of concern for retribution.) A third student worked on an essay for a different class on an open laptop, while a student a few chairs away rested their feet on the desk. Another checked LinkedIn and updated a résumé for “quite a while.”

“I think this prof is trying to do whatever she can to get good reviews because she brought cookies to class today and hasn’t said anything all term about people texting from their MacBooks during class,” the disappointed student wrote in a text after class had ended.

For faculty members who hesitate, for any reason, to assume responsibility for minimizing distractions in the classroom, Chafic Bou-Saba, associate professor of computing technology and information systems at Guilford College, is working on a solution. Bou-Saba is designing a facial recognition system for classroom management. Multiple cameras spread throughout the room will take attendance, monitor whether students are paying attention and detect their emotional states, including whether they are bored, distracted or confused.

“The use of the latest technology was the driving force behind this project,” Bou-Saba said of his research project. “The device will take the stress away [from the teacher] and will document student behavior, if needed, by taking five- to 10-second videos.” The instructor could then say, “Hey, here you are not looking, here you interacted with another student and here you are doing something else.” Bou-Saba hopes that, with training, the AI-powered software could also help detect how much the students are learning. He aims to test the system by the semester’s end.
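Bou-Saba has not released his code, so any illustration is necessarily a guess at the general shape of such a system. The Python sketch below shows only the standard first step, locating faces in a single camera frame with OpenCV’s bundled Haar cascade detector; attendance, identification, attention and emotion models would sit on top of a loop like this one, and nothing here reflects his actual implementation.

```python
import cv2  # pip install opencv-python

# Hypothetical sketch: detect faces in one frame from the first attached camera.
# Attendance, identification and attention models would sit on top of a loop like this.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
camera = cv2.VideoCapture(0)

ok, frame = camera.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Returns one bounding box per detected face in the frame.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Faces visible in this frame: {len(faces)}")
camera.release()
```

Detecting that a face is present is the easy part; reliably inferring attention or emotion from it is where the contested claims begin.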


Facial recognition technologies exist in a largely unregulated ecosystem. Advancements have “outpaced laws and regulations, raising significant concerns related to equity, privacy and civil liberties,” according to a National Academies of Sciences, Engineering, and Medicine report published last month. While some computer scientists are innovating with facial recognition for educational purposes, other scientists and technology ethicists question whether using it to foster learning is sound.

“Engineering has more of a history of building and certifying stuff to go out in the world,” said Erik Learned-Miller, chair of the faculty and professor of computer science at the University of Massachusetts at Amherst. “But computer science has been a little bit more Wild West–y.”

Learned-Miller is aware of other computer scientists, including colleagues at his university, who are developing facial recognition systems for use with students in class. But he is in “no particular rush” for the technology to surface in learning environments.

“It’s not fun to be under the microscope all the time,” Learned-Miller said. “You have to be very careful of the stress that surveillance puts on students.”

A (Brief) Primer on Facial Recognition

Facial recognition technologies use statistical measurements of a person’s face to identify them against a digital database of faces. These AI-powered systems offer fast, often accurate and automated identity verification. Without this technology, identification can be significantly slower and often infeasible.
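To make those statistical measurements concrete: modern systems typically convert each face image into a numeric embedding vector and compare vectors rather than pixels. The Python sketch below shows only that comparison step, with tiny made-up four-dimensional embeddings standing in for the vectors, hundreds of dimensions long, that a trained neural network would produce; the names and matching threshold are illustrative, not drawn from any system in this article.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database, threshold=0.8):
    """Return the best-matching enrolled name, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy four-dimensional "embeddings"; a real system's neural network would
# produce much longer vectors.
rng = np.random.default_rng(0)
database = {"student_a": rng.normal(size=4), "student_b": rng.normal(size=4)}
probe = database["student_a"] + rng.normal(scale=0.05, size=4)  # a noisy re-capture
print(identify(probe, database))  # expected output: student_a
```

The threshold trades false matches against missed matches, the very errors that, at scale, have produced the wrongful arrests discussed below.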

Many people use facial recognition technology to unlock their personal cellphones. Passport control officials use it to screen international visitors. Some stadiums surveil crowds to identify malicious actors at sporting events, and Taylor Swift’s security team has reportedly used the technology at her concerts to spot known stalkers.

In higher education, facial recognition applications for surveillance sit against the backdrop of such use across society. More than 3,100 U.S. agencies, including the FBI and the Department of Homeland Security, reportedly use Clearview AI, a breakthrough product trained on billions of images, many of them scraped from the web without consent. The tool delivers far better results than searches of government databases of mug shots and driver’s license photos. But civil rights advocates argue that when facial recognition tools are routinely deployed on people who have not been accused of crimes, everyone exists in a “perpetual police lineup.”

In recent years, some colleges have adopted online proctoring software and services, though not without controversy. Such practices surged during the COVID-19 pandemic, given new stresses and new temptations to cheat, but so did pushback. In 2022, for example, a federal judge found that Cleveland State University had violated a student’s Fourth Amendment rights when a remote proctor required a webcam scan of the student’s room before a remote test.

Facial recognition technology also has a history of unintended harms and intentional misuse, which, combined with its low barrier to entry, makes it problematic. False positives and false negatives have led to the wrongful arrests of Black individuals, for example. Also, many applications, including in educational settings, raise a range of as-yet unanswered legal, social and ethical concerns, according to the National Academies report and technology ethics scholars Inside Higher Ed consulted.

“Some uses of [facial recognition technology] may well cause such concern that they should be not only regulated but prohibited,” the National Academies report stated. “Currently, with few exceptions … the nation does not have authoritative guidance, regulations or laws that adequately address these concerns broadly.”

To mitigate potential harm, the report calls on the legislative, judicial and executive branches of the U.S. government, along with national and international standards organizations, to provide regulation that answers questions such as: Who may use facial recognition technology? Where may it be used? What uses are acceptable? When is it acceptable to rely on results provided by a third party? The list goes on.

The report authors suggest that the federal government consider prohibiting surveillance applications—either mass or individual—for all but “properly authorized law enforcement or national security purposes.” If the government were to follow through, facial recognition applications in classroom settings would be banned.

It’s Not (Yet) Illegal in Class. But Is It Educationally Sound?

Facial recognition applications have stirred controversy in all parts of the research, development and deployment pipeline, according to Kathleen Creel, assistant professor of philosophy and computer science at Northeastern University. In many cases, data sets of faces used in training the systems before deployment have been collected passively from security cameras or scraped from the internet without consent. In other cases, systems are deployed before problems of racial and gender bias have been addressed.

Ellucian, an educational technology company, argues that “facial recognition can give students better service and security” by eliminating the need to carry a student ID. The firm also highlights that the technology offers faculty and administrators “on-the-spot classroom analytics based on audience reactions during a lecture.”

But some academics question whether the technology has real educational value. Preventing students from looking at their phones, for example, is not the same as achieving educational goals.

“It’s circumventing an authentic connection to learning and replacing it with a fear of punishment,” Creel said.

Nir Eisikovits, philosophy professor and founding director of the Applied Ethics Center at the University of Massachusetts at Boston, agrees.

“Introducing a surveillance mindset into a learning environment seems to be a terrible idea for collaboration and trust in the classroom,” Eisikovits said.

But Bou-Saba disagrees.

“I’m developing these tools to improve student experience in the classroom,” he said. “These tools will be integral to education in the near future … The positives outweigh the negatives.”

Is In-Class Facial Recognition Ethical?

Bou-Saba is concerned about his students’ privacy. For this reason, he has designed his system to log information and video in an encrypted local database, and he has promised students, “at least at this stage,” that none of their images will be stored online. Of the National Academies report, he said, “there’s always a game of mouse and cat with technology and the law.”
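He has not published implementation details, so the Python sketch below is only a plausible illustration of the pattern he describes: encrypting each record before it reaches a local database. The table layout, the record fields and the use of the cryptography package’s Fernet cipher are illustrative assumptions, not his design.

```python
import sqlite3
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical sketch only: encrypt each record before it reaches local storage.
# A real deployment would keep the key in an OS keystore or hardware module,
# never next to the database it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

conn = sqlite3.connect("classroom_local.db")  # a local file; nothing leaves the machine
conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload BLOB)")

record = b'{"student": "anon-17", "event": "looked_away", "clip_seconds": 8}'
conn.execute("INSERT INTO events (payload) VALUES (?)", (cipher.encrypt(record),))
conn.commit()

# The file on disk holds only ciphertext; reading it back requires the key.
ciphertext = conn.execute("SELECT payload FROM events").fetchone()[0]
print(cipher.decrypt(ciphertext))
conn.close()
```

Encryption at rest protects recordings if the machine is compromised, though it does not by itself answer the consent and surveillance questions ethicists raise.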

Bou-Saba is also aware of the potential for a for-profit company to monetize his research.

“There is absolutely a big chance of that happening,” he said. “We have to be open-minded about what could happen.”

But given the business sector’s history of monetizing academic research into lucrative tech products sold back to colleges, Eisikovits has some concerns. Educational technology companies typically collect individual or aggregate data about the students who use their products, and such companies may then be incentivized to market distraction-reduction and focus-enhancing software to students, Eisikovits said.

But even local, homegrown facial recognition systems for education concern some technology ethics scholars, in part because of student-faculty power imbalances.

“Would faculty members or administrators like to be continuously surveilled to see if we were paying attention or checking our phones for potential messages from our children or people we care for that might indeed be time sensitive?” Creel asked.

Even when students must opt in to in-class facial recognition technologies, Eisikovits questions whether such agreements are meaningful.

“Does it look like the terms and conditions usually look, which people don’t usually read before they sign an agreement?” Eisikovits asked.

Regardless of whether a facial recognition system for classroom management is designed locally or provided by a third party, decision-makers could aggregate the data and draw problematic conclusions from student profiles. That might narrow the definition of what constitutes an “acceptable” student, which could further marginalize already marginalized students, according to a 2020 University of Michigan report. A student who is a conscientious parent, for example, may skip class out of concern that they’ll be punished for checking a phone for messages while a child is sick. Similarly, an engaged neurodivergent student may fear discipline for behaviors beyond their control.

The Michigan report’s authors strongly recommend a ban on facial recognition applications in educational settings. For institutions that nonetheless proceed, the authors suggest that campus leaders evaluate the ethical implications themselves, given the absence of state and national guidance. They also write plainly, “Do not use [facial recognition] systems to police student behavior.”

Finally, AI system outputs often lack explainability, which makes drawing high-stakes conclusions that may affect students ethically fraught.

Where Does Higher Ed Go From Here?

In 2020, U.S. college students on four dozen campuses staged protests and posted online petitions against the use of facial recognition systems. Those protests, however, largely addressed on-campus surveillance for security purposes. In-class facial recognition applications designed to foster student learning are a more recent addition to the learning landscape.

“There was a rising consensus for a moment there that we shouldn’t use these technologies in this passive, monitoring way,” Creel said. “I worry that that consensus is falling apart—not because any of the underlying fundamentals have changed.”

Creel’s concern is grounded in political reality. The European Union, for example, had considered an unconditional ban on facial recognition technologies by governments and private-sector actors in its Artificial Intelligence Act, but the final text of the act stops short of such a ban.

Until U.S. laws and regulations catch up with facial recognition advancements, higher education leaders might proceed with caution, given the potential for these AI systems to “exacerbate racism, normalize surveillance, erode privacy, narrow the definition of the ‘acceptable’ student, commodify data, and institutionalize inaccuracy,” according to the University of Michigan report.

“Computer scientists might ask themselves not only ‘How?’ questions but ‘Why?’ ‘What is it good for?’ and ‘What does this promote?’ questions,” Eisikovits said.

Some in academe remain open to possibilities.

“Maybe somebody will show that there’s a really great, lightweight way to use this stuff in an educational setting that doesn’t bother the students and really provides a great value to the teacher,” Learned-Miller said. “If they can show that, I’m not against it. But there’re a lot of other ways of making schools better, and I would probably go down those paths first.”

But the disappointed humanities student at the university in the Northeast is unconvinced.

“The reason my classmates are in [this professor’s] class is because she has a chill reputation. They wouldn’t be in the class if she didn’t,” the student said. “At the end of the day, students have to decide that they’re invested, and no amount of technology is going to help with that.”
