The Information Cage
TECHNOLOGY | October 23, 2010
Marketers Can Glean Private Data on Facebook
This issue of targeted advertising is bigger than I originally thought.
Back in the day, if I were on the Internet in Ithaca, advertisers showed ads for businesses in that area. When I went to Rochester to see family, POOF! The ads magically changed to Rochester businesses and addresses. And so on.
The information in the two papers reviewed in The New York Times piece has caused the scales to fall from my eyes. Targeted advertising has become significantly more intrusive than benign Internet Protocol address recognition.
But the key to its intrusion, reader take note, is that it functions on social networking sites. It is like an open house party: you invite a few friends, but at any given moment you have little or no control over who comes in the door. If the unknown guests vandalize your house or steal something when no one is looking, it is almost impossible to identify the responsible parties or obtain relief. Your choice, then, is whether to invite “friends” in at all.
What kinds of behavior does this analogy compare to vandalism and theft? Personal-portfolio-pinpointing aggregators are “stealing” your identity. In the same sense as physical property, however? No. Our legal system has no provision that protects identifying characteristics from the mining and recombining that aggregators such as ChoicePoint and targeted advertisers do. Nor does it quantify the material worth of that information.
Famous law professors, Lessig and Samuelson among them, have argued that monetary value should be placed on personal information -- not just the limited set of personally identifiable information about which we hear so much in the data breach context, but all information that is about the person. In that way the information can enter the marketplace and be bought and sold like any other commodity. It is, in short, a market solution.
Privacy experts, not entirely comfortable with that idea, have suggested that the law adopt fair information practices. Any individual or entity gathering the information would be required to tell the user what information is gathered, why, the purposes to which it will be put, how long it will be kept, what administrative, technical and physical security safeguards will be applied to it, how the person may access that information, and the process by which to correct mistakes. That would be the legal option.
Good luck! In contrast to much of the developed world, the United States reveres its market drivers too much to consider so extensive a regime. Our “sectoral” laws, as they are called in contrast to the European Union’s comprehensive “directives,” protect pieces of information about us: patient health care records, education records, some financial information. Most states protect library circulation records. And then there are the little snippets, the results of high-profile cases, such as video lending records … but with business delivery methods changing so rapidly it is hard to know whether my streaming choices on Netflix are protected to the same degree as my movie choices at Blockbuster were when it was still in business. And BTW, what happened to those records when it went under? No law speaks to how they were retained, secured or destroyed. And these are all pieces, not a coherent framework.
Technology, of course, changes the game before the laws catch up. New HTML code will put the “cookies” of today into a digital museum. We will look at them as we do unsophisticated archeological artifacts. “Cool, an arrowhead! They actually fed and clothed themselves with that?!?” “Cool, a ‘cookie!’ Internet companies actually did business with that?!?”
And then there is this new method of marketing profiling. Should anyone be surprised that the advertising industry has concocted a more dynamic system than cookies? Not really. Cookies are so Web 1.0. Dynamic ad targeting – a name I am giving it in the absence of any other – works this way: advertisers use the information the individual provides not only to select ads, as currently occurs, but also track whether the individual clicks through to those ads, as a follow-up means of honing the profile. How Web 2.0.
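The feedback loop described above can be sketched in a few lines of code. This is a minimal illustration under my own assumptions, not any ad network’s actual system: a profile is seeded from declared information, ads are chosen against it, and each click reinforces the very inference that selected the ad.

```python
# Illustrative sketch of a dynamic ad targeting loop. All names and
# numbers are hypothetical; real systems are far more elaborate.

def select_ads(profile, inventory):
    """Rank ads by how strongly the profile scores their target attribute."""
    return sorted(inventory,
                  key=lambda ad: profile.get(ad["target"], 0.0),
                  reverse=True)

def record_click(profile, ad, boost=0.25):
    """The honing step: a click strengthens the assumption behind the ad,
    regardless of who was actually at the keyboard."""
    profile[ad["target"]] = profile.get(ad["target"], 0.0) + boost
    return profile

# Profile seeded from information the user volunteered.
profile = {"sports": 0.5, "nursing": 0.1}
inventory = [{"id": "a1", "target": "sports"},
             {"id": "a2", "target": "nursing"}]

shown = select_ads(profile, inventory)[0]  # highest-scoring ad is shown
record_click(profile, shown)               # the click hardens the category
```

Note the one-way ratchet: the code has no mechanism for weakening an assumption, which is exactly the time-locked quality discussed below.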
Why is this method disturbing? For starters, it suggests a slippery slope of invasiveness whose next step is unknown. Even if this one is only another predictable step in targeted advertising, the law provides no endpoint. Information about the individual is not protected, and our commercially driven society shapes social norms in such a way that for almost nothing, and sometimes for nothing at all, people give information about themselves away. Or worse yet … have it taken without their notice.
The inclination to provide information is so pervasive that it creates an expectation that users and customers will provide it. Suspicion arises when they don’t! Take, for example, the case of a mortgagee or other lender. If lenders don’t have ready access to information about a mortgagor or borrower’s assets, they may, in a lender’s market, prefer the borrowers about whom they do. Unbeknownst to the more cautious person, he or she is at a disadvantage in obtaining credit or a loan.
More problematic is the recognition that the profiles built from dynamic ad targeting rest on time-locked assumptions. Advertisers glean sexual preference from Facebook profiles. They use that information to place targeted ads – for example, information about nursing schools for men identified as “gay.” If the individual profiled hits the link, that action strengthens the assumption, no matter what the real person’s preferences are or whether they change over time. Maybe it was not even that person but rather an old friend from college or a cousin visiting for the weekend who used both the computer and the Facebook page. What is real in this scenario? The profile of the person – which looks confined and certain, defined by any variety of categories (race, class, ethnicity, sexual preference, etc.) – or the person, who is fluid, changeable and perhaps purposely not defined at all, or not well, by these categories?
Identity politics went out of fashion in the 1990s. Well it should have, because those politics alienated so many people, including on the left, whence the notion originally came, with their claustrophobic containment of people into racial, ethnic and gender categories. How ironic that identity politics returns from the right, where business, as a rule, lies on the political spectrum. In neither political configuration does it serve the individual or the society. Categorization appears to be an integral part of intellectual and social development. Male/Female. Good/Evil. Insider/Outsider. Privileged/Not Privileged. Humans could not function without categories in language or life. But to recognize that fundamental proclivity is not to suggest that categories always benefit humanity. How many historical atrocities have been committed in the name of categories is not known; even in the wake of those we mourn – the holocaust of European Jews in the twentieth century, for example – we seem to deny, forget, or rationalize away the lessons. Rwanda. Darfur. Matthew Shepard.
“Privacy is dead, get over it” is so hackneyed a phrase – and one retracted by its original author – but it still pertains. If privacy is dead, then let me say: I am not over it. If it is not, and that is my belief, let’s come to understand in what forms it remains meaningful and how we should strive to protect it technologically, socially, legally and within a market economy. I don’t want to be profiled. I don’t want to be defined by marketers whose only interest in me is monetary. I don’t want to spend precious time and energy fighting the imposition of their information cage. Yes, that’s it. Max Weber spoke at the turn of the last century about the “iron cage” falling down on “man” in the industrial age. Make no mistake. An information cage is descending on us in this age.