
Academics share photos all the time. We use them on faculty profiles, to promote talks and events, and to mark achievements. Self-presentation research suggests good reasons for this. Images can help cultivate a human, trusting, positive and professional impression with prospective students, potential research participants and colleagues. A website full of smiling faces is more appealing than a list of faculty members’ names.

Increasingly, however, the widespread and unrestricted availability of photos poses risks. For example, Madison Square Garden recently blocked employees of a law firm that was suing the Garden’s owners from attending events for which they had purchased tickets. How? By training a facial recognition system to spot them at the door using photos from the firm’s website. Images can also be aggregated to train generative AI systems, as in the lawsuits Stability AI and others currently face after scraping millions of online images of ambiguous provenance. The result can be AI-generated images that closely resemble real people’s likenesses and threaten content creators’ livelihoods.

It’s not just AI. People see our photos via image search results, and, as a self-presentation researcher, I’d argue they are the foundation for web searchers’ impressions of us. They constitute an online profile, but among technology platforms, search results are the profile over which we have the least control. This can be especially consequential for folks undergoing significant transitions, like gender confirmation procedures, where old images could mean painful misgendering, confusion or discrimination. Public photos can also make it easier to recognize or even impersonate researchers targeted for harassment due to their identities and/or research areas. I have also been in uncomfortable meetings where colleagues suggested using online photos to ascertain others’ visible minority status, a veritable minefield of potential discrimination and misidentification. All of these possibilities led some countries outside the United States to support a right to be forgotten by search engines.

To summarize, our images: 1) may be inadvertently feeding the big data machines that so many of us criticize for being exploitative and 2) continue to lurk in search results, where they can be harmfully misinterpreted or misused. Those concerns felt to me like good reasons to explore how to better manage the use, availability and longevity of our online images. Could we get similar benefits while reducing the risks of misuse?

As a first step, I recently tried to get my Google Image search results to near zero. Removing content that I could edit directly was easy: I deleted photos from my personal and lab websites, as well as my LinkedIn profile. My personal social media profiles were already private. There were about 35 other photos in my search results, so I sent emails asking for them to be removed. In all, that meant sending 58 emails, including follow-ups, to 31 individuals, plus one phone call with a senior university administrator who preferred synchronous discussion. It happened over about six weeks, during which I checked my search results every morning, and I am grateful to everyone who helped with my image removal requests.

To be sure, it was a privileged exercise. Not everybody has the time, career security or resources to speak up or persevere in this way. In the spirit of Janet Vertesi’s Opt Out project, my goal was to opt out of public-facing online images to derive potential lessons.

Lessons From Opting Out

  1. You might not have the right to request removal. College and university media release forms vary in their specifics, but the ones I’ve seen: 1) are required when content is posted on the web and 2) assert rights to content in all media, in perpetuity and without recourse. That feels draconian and contrasts starkly with social platforms’ Terms of Service, which typically claim rights over users’ content only until they delete it or leave the platform. Nobody turned down my requests to remove content (although two were ignored), but it seems they easily could have. Plus, the release form language may discourage reasonable people from even trying.
  2. Deleting is not as easy as it may sound. It’s easy to click “delete” on Facebook or Instagram and watch photos disappear, but the same wasn’t true here. Many well-intentioned people struggled to remove my content from their organizations’ websites, and one was surprised by my request because they thought the content had been deleted in 2021. Those struggles were both social and technical in nature. Socially, I was sometimes the first to make such a request, so staff members wanted time and approvals to be sure removal was consistent with policy and precedent. In one case, that took four weeks. On the technical side, certain content management systems did not appear geared for easy deletion. Several people removed the links to my image from a page but did not delete the image file itself, which kept appearing in search results until I emailed again. Even then, some images lingered in inaccessible caches for hours or days. As many as 22 of the images disappeared from search results only after I used Google’s Remove Outdated Content tool.
  3. Old images accumulate as de facto histories. Quite a few of the images I found were online residue of my past: newsletter mentions, awards, colloquium talks and so forth. Some dated back to the early 2010s. At the time that I shared those images, I assumed they would be used for promotion, but I never considered they might sit there forever. Some people even said they kept them online as a “historical record,” although everybody agreed to delete them if I preferred that. We have tended to treat websites as virtually free and infinite archival storage, but do the benefits of keeping histories in full public view outweigh the potential costs to the individuals portrayed?
  4. Most of the images I found were copies of the same file. That made removal far harder, because Google hides duplicate search results. Every time I thought an image was gone for good, Google showed an even more obscure instance of it next time. That reflects unnecessary redundancy and hints at better systems for sharing images.

What Can We Do?

Publicly available images aren’t unique to universities, and some images of us, such as those in newspapers, really are part of the public or historical record and may never come down (although this has sparked heated debate among journalists). Some people may choose to be more public and consent to highly visible photos on, say, their university homepage. They should also expect responsible stewardship over those images. That said, the majority of images I found simply hadn’t been removed or were duplicates. By combining policy change, elevated vigilance and improved infrastructure, we could eliminate a significant amount of the problem.

As some of the first inhabitants of the web, universities are well positioned to do this. In the early days of the internet, people shared everything freely, with few concerns about security. As threats loomed, we’ve adapted by making our networks, processes and information more secure, with solutions like eduroam and guest Wi-Fi access, single sign-on authentication, ORCID, and others. To look me up in my university’s online directory, for example, you must solve a Captcha to see my email address, and you must be on the campus network (or VPN) to see my login ID. But before my photo was removed from university websites at my request, it could be downloaded in the clear by any human or bot. Protecting images can be the next advance in our security trajectory.

Here are some steps individuals and universities can take to get our image problem under control.

  • Demand more control over your image. Your image online, and your search results, should be under your control, but they never will be if you sign away your rights in perpetuity and without recourse. Ask for more control. This could include options for fixed-duration licenses, deletion rights under certain circumstances and making some content available only to certain audiences or behind authentication.
  • Revisit media release forms. The draconian language of media releases likely dates from the print era, when removing a printed image was expensive and infeasible and publications had a shorter circulation life. That makes less sense in a world where content is easily mutable and lasts forever unless someone removes it, and where social media platforms have enticed us to share content while shaping our expectations with promises of control and easy deletion. My effort further suggests that some organizations may not even assert the rights they claim. It’s time for universities to revisit media rights as access to online content broadens and expectations shift.
  • Build content systems that allow for easier control and deletion. Giving people such rights is not scalable or sustainable in the current model, where removal happens at human discretion via email. We need better content-control infrastructure for individuals. Right now, that means developing content management systems that make it easy to delete both image references and image files, and that ensure deleted files disappear quickly from server caches. In the longer term, it means building infrastructures for authentication-limited webpages at different levels (e.g., fully public, human-verified, broad university community, current faculty/staff/students). It could also mean image control systems where users upload a current photo and then share authentication-protected and/or time-limited links to that photo for faculty profile pages and events. So if I’m visiting another campus, I’d send a link, and the resulting image would update every time I uploaded a new photo. And it would be visible only to, say, people on that university’s campus network or those who have solved a Captcha, and/or it would be set to disappear, say, one week after my talk.
  • Be vigilant about what is public. Public-facing websites are important, but they must be deliberate. They need not, and should not, exhaustively chronicle ephemera such as old newsletters, award winners and colloquium speakers, which can be reasonably restricted to those with a bona fide interest. Individuals and communication staff can establish routines for reviewing and removing content, moving older materials to authentication-protected sites and/or to the university archives to preserve the historical record. Plus, we can all keep an eye on our search results and request removal of residual content.
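The time-limited photo links described above could be built with signed, expiring URLs, a common pattern in web infrastructure. Here is a minimal sketch in Python; the function names, the secret key, and the URL format are all hypothetical illustrations, not an existing system:

```python
import hashlib
import hmac
import time

# Hypothetical server-side secret; a real deployment would load this securely.
SECRET_KEY = b"replace-with-a-server-side-secret"

def make_photo_link(photo_id: str, ttl_seconds: int) -> str:
    """Build a time-limited link; the HMAC signature binds the photo ID to its expiry."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{photo_id}:{expires}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"/photos/{photo_id}?expires={expires}&sig={sig}"

def verify_photo_link(photo_id: str, expires: int, sig: str) -> bool:
    """Reject the link if the signature is invalid or the expiry time has passed."""
    payload = f"{photo_id}:{expires}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < expires
```

Because the expiry is part of the signed payload, a recipient cannot extend a link’s lifetime by editing the URL, and the server can honor deletion simply by removing the underlying file. A speaker could hand out a link good for one week after a talk, after which it stops resolving.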

We can do this! And we should. As I said at the start, we have some great reasons to keep putting our images online. Proceeding with the current model of claiming far-reaching rights and then making everything freely available in perpetuity, however, is no longer safe, secure or smart. By demanding more inclusive and appropriate rights as individuals, and by building infrastructures to support those rights as institutions, we can adapt to yet another online security challenge.
