Social Media, Privacy and Technological Change

Scott McLemee reviews new and forthcoming titles from university presses that take up these interconnected subjects.

March 23, 2018

Things have been rocky of late in the public’s love affair with our ever more sophisticated gadgets. The troubles have been building up for a while: cyberbullying, revenge porn, Twitter-bot mayhem … Last month, Amazon’s Alexa started randomly “laughing at users and creeping them out,” while Facebook’s vast and mostly unaccountable power would have made a #DeleteFacebook campaign inevitable even without reports of a massive data breach. And the week started with the death of a pedestrian hit by a self-driving car -- an eventuality that no doubt crossed most people’s minds immediately upon hearing the words “self-driving car” for the first time.

The relationship isn’t over -- even if, from the human side, it often seems more like a case of Stockholm syndrome than a romance. Several new and forthcoming titles from university presses take up the interconnected subjects of social media, privacy and technological change. Here’s a brief survey; quoted material is taken from publishers’ catalogs and websites.

Originally published in Germany, Roberto Simanowski’s Facebook Society: Losing Ourselves in Sharing Ourselves (Columbia University Press, July) maintains that social media “remake the self in their [own] image” by conditioning users to experience their own lives as raw material for “episodic autobiographies whose real author is the algorithm lurking behind the interface.” Appearing in English two years and billions of likes later, it will presumably find readers with an even more attenuated “cultural memory and collective identity in an emergent digital nation.” Lee Humphreys’s The Qualified Self: Social Media and the Accounting of Everyday Life (MIT Press, March) offers an at least implicit dissent by arguing that “predigital precursors of today’s digital and mobile platforms for posting text and images” (e.g., diaries, pocket notebooks, photo albums) have allowed people “to catalog and share their lives for several centuries.” Hence, our “ability to take selfies has not turned us into needy narcissists; it’s part of a longer story about how people account for everyday life.” Perhaps, though, the options aren’t mutually exclusive, as the author seems to imply.

The disinhibiting effects of online communication are well established. Moderation often seems to be exercised after the fact, when conducted at all. (A death threat is taken down eventually; lesser forms of harassment may enjoy the benefit of the doubt.) But according to Tarleton Gillespie’s Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press, June), that is changing with the rise of a powerful but normally inconspicuous layer of digital operatives. Content moderators -- those “who censor or promote user-posted content” on social-media platforms -- have tools “to curb trolling, ban hate speech, and censor pornography” that can also be used to “silence the speech you need to hear.” Their role “receives little public scrutiny even as it shapes social norms,” with “consequences for public discourse, cultural production, and social interaction.”

And it’s easy to imagine content moderation becoming a much faster and more discriminating process when combined with the disruptive technology discussed in Terry Sejnowski’s The Deep Learning Revolution (MIT, May). The author, “one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version” of artificial intelligence, helped develop “deep learning networks” capable not just of extracting and processing information but of “gradually acquiring the skills needed to navigate novel environments” -- as exhibited by, for example, driverless cars. Which is a touchy subject just now, but give it time: Sejnowski predicts the development of, among other things, “a personal cognitive assistant [that] will augment your puny human brain.” By that point, I fear, the driverless cars will start running us over on purpose.

Meredith Broussard makes the case against “technochauvinism” -- defined as “the belief that technology is always the solution” -- in Artificial Unintelligence: How Computers Misunderstand the World (MIT, April). With a series of case studies, the author “uses artificial intelligence to investigate why students can’t pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software.” Clearly no Luddite, she stresses the need to recognize both the power and the limits of our technology, however smart and responsive it may become.

Our devices possess no sense of privacy. On the contrary, “popular digital tools are designed to expose people and manipulate users into disclosing personal information,” as Woodrow Hartzog charges in Privacy's Blueprint: The Battle to Control the Design of New Technologies (Harvard University Press, April). It’s time for “a new kind of privacy law responsive to the way people actually perceive and use digital technologies” -- and new regulations to “prohibit malicious interfaces that deceive users and leave them vulnerable” and “require safeguards against abuses of biometric surveillance,” among other things.

Two other books, also from Harvard, trace the historical vicissitudes of privacy. Sarah E. Igo’s The Known Citizen: A History of Privacy in Modern America recounts how, between the 19th and 21st centuries, “popular journalism and communication technologies, welfare bureaucracies and police tactics, market research and workplace testing, scientific inquiry and computer data banks, tell-all memoirs and social media all propelled privacy to the foreground of U.S. culture.” But establishing laws in defense of privacy -- defending the individual from “wrongful publicity” -- also yielded the unexpected consequence Jennifer E. Rothman analyzes in The Right of Publicity: Privacy Reimagined for a Public World: “Beginning in the 1950s, the right transformed into a fully transferable intellectual property right, generating a host of legal disputes …” It “transformed people into intellectual property, leading to a bizarre world in which you can lose ownership of your own identity.” (Both Igo’s and Rothman’s volumes are due out in May.)

While social media foster the tendency for individuals to think of their own personalities as brands, the trend in the business world has run in the other direction: well-established brands are just as susceptible to a sudden reversal of reputation from a few hostile tweets as any junior-high student or member of the White House staff. “With citizens acting as 24-7 auditors of corporate behavior, one formerly trusted company after another has had their business disrupted with astonishing velocity,” according to James Rubin and Barie Carmichael’s Reset: Business and Society in the New Social Landscape (Columbia University Press, January). Offered as “a strategic road map for businesses to navigate the new era, rebuild trust, and find their voice” by “proactively mitigating the negative social impacts inherent in their business models, strategies, and operations,” Reset will be of interest and use to corporate executives until such time as they are replaced by our AI overlords.

And with that in mind, two books with rather cataclysmic titles bear notice. Small Wars, Big Data: The Information Revolution in Modern Conflict by Eli Berman, Joseph H. Felter and Jacob N. Shapiro with Vestal McIntyre (Princeton University Press, June) argues that “an information-centric understanding of insurgencies,” benefiting from the accumulation of “vast data, rich qualitative evidence, and modern methods,” is superior to conventional military methods. In a more figural vein, Justin Joque’s Deconstruction Machines: Writing in the Age of Cyberwar (University of Minnesota Press, February) presents “a detailed investigation of what happens at the crisis points when cybersecurity systems break down and reveal their internal contradictions,” with cyberattacks “seen as a militarized form of deconstruction in which computer programs are systems that operate within the broader world of texts.” That sounds abstract, but it could just be that our commonplace notions of warfare are out of date.
