Tracy Mitrano's blog

Internet Security: A Sisyphean Task?

As the Director of IT Policy at Cornell University for over twelve years, I estimate that I spent the majority of my time working on security-related issues.  I began work, inauspiciously, on April 1, 2001.  A month later, I remember going over to the Law School with the Security Coordinator to talk about a breach of the Legal Information Institute's servers.   Hackers with Internet Protocol addresses that resolved to the People’s Republic of China had broken into the database and were siphoning off tremendous volumes of material.  I inherited an acceptable use policy that made some references to security, for example the prohibition against sharing passwords, so my supervisor, Vice President Polley McClure, set me the task of writing a separate security policy.  If common criminals and nation-state threats weren’t enough to keep us busy, script kiddies had just become popular.  Teen-age vandals who traded spray paint for computer programs churned out illegal scans, computer worms and various other forms of malware at a rate that was blowing the roof off our technical security analytics.

Under Polley’s watch, Cornell transitioned its “security coordinator” into one of the first full-fledged directors of IT security in the country.  That role, unlike its predecessor, had university-wide responsibilities.  It also began to acquire a number of F.T.E.s who operated as security engineers: doing scans, analysis and forensics; integrating technical controls into network, operations and information services; working with me on policy; and helping educate university leadership, stakeholders and the community about the need for improved security practices.  I do not know of a college or university that did not follow suit one way or another.  Simultaneously, Mark Luker, Vice President of Policy at EDUCAUSE in those years, put Steve Worona and Rodney Petersen on the creation of the EDUCAUSE Security Professionals Conference, the Internet2/EDUCAUSE Security Task Force and the inauguration of the Security List Service.  Most directors of security report to the CIO, and many now join their CIOs at the institution’s risk management big table.   Altogether, counting tools, F.T.E.s and collateral assistance from the offices of audit, counsel, and risk management, the total costs represent big money.

And yet the breaches keep coming.  Sometimes they derive from residual lapses of human error: the spreadsheet with personally identifiable information from ten years ago that sits on an unprotected computer, or, even worse, PII that gets posted on a web site because someone hit the wrong button.  Of increased attention since last summer are persistent nation-state threats.  Among seasoned players in the security arena this type is not new, but it has captured media attention for two principal reasons.  First, mad as a hatter, the New York Times lashed out at China in particular for this kind of activity when it learned that PRC military operations were allegedly responsible for hacking into the mail accounts of reporters whose beat was China.  Then came Edward Snowden and his disclosures.  Trickling out since June of last year, every time I think we have reached the bottom of the barrel, like Jason in Friday the 13th, another one pops up to blow our collective mind.  Or at least it has blown mine.

The details of how and in what ways the United States has been operating offensive Internet security maneuvers should not surprise me. After all, I am a historian. Moreover, my father’s brother was in the Office of Strategic Services during the Second World War and then a member of the Central Intelligence Agency.  Revered among our family for his heroism as a saboteur jumping behind German lines and training spies in the U.K., my “Uncle Bill” (né Anthony Mitrano) made legendary stories of spies and saboteurs.  After 9/11, did I really think that the United States was an innocent player in the world of cyber warfare?   After Israel’s cyber attack on Iran, was it possible to imagine that our country was not involved?  And with my own introduction into this world of security incidents in 2001, how naïve could I be not to imagine that at least some sector of our government, whether the C.I.A., the N.S.A. or the military, was also in the thick of these kinds of activities?

The most recent Snowden disclosures about the N.S.A. spoofing Facebook or using cyber warfare to monitor the Chinese company Huawei do not surprise me.  But when I make the connection between my academic, analytic self and someone who has worked in and around the Internet security area for years, I cannot help but be deeply dismayed.  How can our colleges and universities be expected to compete with this well-funded, hotly motivated, opaque world?  More important: how can we maintain missions that rely on the privacy, security, integrity and confidentiality of information?  I have long argued that our security operations need to rise to the higher level of information management.  That opinion remains.  I have long thought that IT officers and offices should not be made the administrative scapegoat for these breaches.  All too often CIOs have all the responsibility and none of the authority to manage information.  Uninformed administrators from other units collapse a complex world into one word, one solution: security.   Security incidents affect all aspects of our institutions; responsibility for managing them should be shared among units and constituencies.

But the most recent Snowden disclosures have given me pause.  Under the circumstances of a surreptitious, undeclared cyber war, is network security a Sisyphean task for colleges and universities?  Our sector has been severely criticized for its technical lapses, held to account so many demoralizing times, and has spent so much money that we and our students cannot afford to compete in a game that we cannot win.  At least this way, we can’t.  Not without knowing the full measure of what was and continues to go on around us among and between nation-states.  If there was ever a reason for transparency about these operations, it is the untenable position in which our government has placed higher education.  Among essential democratic principles of citizenship, transparency would at least place this undeclared, and out-of-control, cyber warfare into a context we could begin to understand, if not address reasonably with guidance, direction, and hope.


Back in the ICPL Saddle!

Internet Culture, Policy and Law, formerly known as the Institute for Computer Policy and Law, is up and running for 2014, in its 18th consecutive year. The conference runs from noon on September 17th, all day on the 18th, and until noon on the 19th.

ICPL features outstanding speakers and topics this year. Hal Abelson, Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, and a principal in the MIT report on the Aaron Swartz case, will open the program with a talk on that topic.  His co-author on the well-received book Blown to Bits, Harry Lewis, Professor of Computer Science at Harvard University, will follow; many readers know Professor Lewis as an author of wide-ranging interest to higher education, including for his work Excellence Without a Soul, and as the former Dean of Harvard College.

Wednesday opens with ICPL charter member and NACUA fellow Steve McDonald, General Counsel at the Rhode Island School of Design, who brings us up to date on copyright.  Steve Wicker, Professor of Electrical and Computer Engineering at Cornell, author of Cellular Convergence and the Death of Privacy, and instructor in one of Cornell’s first MOOCs, will speak on the subject of his book in the session following.  After lunch, Jon Kleinberg, Professor of Computer and Information Sciences at Cornell and, among his many accomplishments, a MacArthur Fellow (“genius award”), will speak on his most recent research about networks, crowds and markets.

Finally, on Wednesday, Urs Gasser, Professor of Practice at Harvard Law School and Executive Director of the Berkman Center for Internet & Society at Harvard University, will speak on international Internet governance, always a hot topic, and one heating up quickly with the announcement that ICANN will transform into something new in 2015.  In keeping with our international interests, Fredrik Logevall, Vice Provost for International Affairs and Director of the Einaudi Center at Cornell, will speak on international education.

Friday morning opens with a discussion entitled “From MOOCs to Cybercollege” with Cornell University Associate Librarian Oya Rieger, who has been part of the MOOC effort, and Thomas Howe of Southwestern University, Professor of Art and Art History and Coordinator General of the Fondazione Restoring Ancient Stabiae, who has pioneered an example of “cybercollege,” or international, inter-institutional courses.  We will close with a discussion of the new information literacy standards, featuring Kornelia Tancheva, Director of Olin and Uris Libraries at Cornell University.

This lineup of speakers and topics should electrify anyone in the academy interested in the principal issues driving both the Internet and higher education today.  It is an honor to direct this program, and I very much look forward to seeing old friends and meeting new ones in Ithaca this September!


Really? You Talked About Security?

Awash in insightful criticism about his call to President Obama regarding privacy, Mark Zuckerberg has had his communications people release a statement that the conversation was about security, not privacy.  Does this etymological shell game impress you?

Back to basics, Mr. Zuckerberg. “Security” in this context is a subset of privacy, or of the fair information practices that have long stood the test of time as foundational information management. Scholars may slice and dice the concepts, but essentially they break down into four practices:

Notice: that information about an individual is being kept by the entity, and that the individual is notified if it is disclosed.  “Disclosed” includes data mining for anything more than technical operations; it most certainly includes the sale of data.  Has Facebook informed you of the sale of your data to advertisers? No. It fails on point one.

Relevancy: that information about an individual maintained by the entity is relevant to its business operations. I suppose one could take Facebook’s terms of service and privacy policy and, with some background in “big data” or “data science,” discern that it, like Google and just about everything else on the web, is somehow connected with an advertising revenue stream, but it would take some connecting of dots that are otherwise not clearly stated.

Ability to Correct the Record:  most times this practice is labeled “transparency,” but overexposure and politicization of this word prompt me to go back to its original name and concept.  Quite simply, it is the ability of the individual to see the information held by the entity and correct any “mistakes.”  Originally, it derived from fair credit reporting, in instances when names or identities were confused and, for example, a credit rating tanked because the entity had the wrong “John Smith.”

In this case, I will use my own situation.  After the fall 2008 semester of “Culture, Law and Politics of the Internet” passed, I wanted to keep in touch with my students, so I fired up a Facebook account.  Exhausted by the news feeds, and in protest of Facebook’s practice of switching privacy settings without notice to users (a practice that the F.T.C. later, by agreement, barred Facebook from continuing), I closed my account.  (Sorry, students!  You can always email me!!!)  But it turns out that Facebook still has all the information it once held about me.  They did not purge the account; they only blocked external access to it.  Yet I do not have the ability or right to correct that record.

What if there are postings there that put me in a false light?  That are defamatory?  I would have to bring a full-fledged legal case to correct them.  That is not feasible.  Nor, under these circumstances, is Facebook’s practice consistent with the “ability to correct the record” principle.

Security: Ah, now we come to the heart of the matter.  Security, meaning the administrative, logical and physical rules and technical safeguards to keep the information confidential, with appropriate access controls and with full integrity, is one of the foundational privacy practices.  But it in and of itself is not, does not equal, and should not be thought of as synonymous with privacy.

Taking Facebook’s communications team at its word, am I to imagine Mr. Zuckerberg, a brilliant coder to be sure, taking the President’s time to talk about safeguards?  I doubt it.  That explanation speaks to damage control, camouflage and disingenuousness.  How ignorant does he think people are about these issues?  Billions of dollars in corporate valuation, not to mention his own bank account, suggest he might not be entirely wrong on that one.  But the times may be a-changing … and the pushback over this little brouhaha is an indication that there may come a day when it might be otherwise. 


Internet Ironies

This week was rich with Internet privacy ironies. First, we had Edward Snowden and Eric Schmidt at South by Southwest, each talking about privacy, and then Mark Zuckerberg phoning Obama, who was recently profiled in The New Yorker, to complain about the N.S.A.

Edward Snowden apparently has shifted his gaze to consumer privacy. Then Eric Schmidt gets up on stage and prognosticates that in the future consumers can buy privacy … back. Well, he didn’t say the “back” part, because of course he is not going to announce that Google, as the principal Internet Titan, and an advertising company to boot, has almost single-handedly been responsible for denuding consumers of it in the first place. How does it work? Data mining of everything from searches to data stores (email, blogs, etc.), and then selling the information for targeted advertising. Search for vitamins, and you will suddenly see ads for vitamins and supplements on your web pages.

Wait, wait, there’s more. Let’s say you are on a store web site, buying camping equipment. You put a couple of things in your shopping cart. Your neighbor suddenly drops in for a cup of coffee, and 15 or 20 minutes go by without activity on the transactional page. Your phone rings.  It is the store asking if you need any help in completing your order.  But you have never done business with them before. How do they even know your phone number? It could be one of a thousand ways: Google, for example, could have sold your contact information to a data broker such as Acxiom, which has a full-fledged profile on you, which it sells to companies such as the camping store for use in exactly this kind of moment. And now Eric Schmidt tells us that there may be a market to buy my privacy back.  Information about me, my life, my preferences, shopping and travel habits, income and assets, number of children, who my neighbors and friends and work associates are, and heaven only knows what else. Given the manner in which they identify patterns based on millions of people and sophisticated software, in some behavioral ways these companies may know more about me than I do. Privacy as a luxury that Google will sell me. That is rich.

Then Mark Zuckerberg called President Obama to complain about another Snowden disclosure. The New York Times did not report President Obama’s response, but read the recent David Remnick profile of Obama in The New Yorker. Obama claims that the N.S.A. activity is legal, but in whose opinion? “Trust me” approval by a secret court is a tautology.  

Let’s pull out the threads of this idea. Say that the D.O.J., on behalf of the N.S.A., goes to the FISA Court with a warrant application alleging persons associated with terrorism and a request for content monitoring of their Internet activities.  The Court will review the request, first, to see if the evidence meets the standard of a person significant to a terrorist investigation, and second, for probable cause of criminal activity.  Assuming that the application meets both of these standards, the Court grants approval. Note at this juncture that, because of the closed nature of these proceedings, the public does not know the quality of the evidence; after all, the FISA Court is not only secret but ex parte, meaning that there is no opposing counsel. Nor do we know the extent of these investigations, their scope, or how long they last. What is done with the information about these persons?  Do these investigations include U.S. persons (either citizens or resident aliens, a legal term, by the way), whether directly as targets or as individuals caught up in the sweep, a few degrees of separation away?

How much do the justices know or understand about the technological processes underlying these searches? This is a fascinating question even for regular, criminal, Title III courts; the secrecy surrounding the FISA Court makes the question all the more intriguing. Apparently, the N.S.A. spoofs Facebook login pages and then plants malicious code on a target’s computer. Do the justices have a fundamental understanding of network and computer technology, together with the nature and functionality of code installed surreptitiously for nefarious purposes?   Do they understand the depth of these deceptions and the quality of these intrusions? Is this the same technology used in advanced persistent threats against our colleges and universities?  Must an empire operate with principles that compete with a republic?  Does national security trump civil liberties? Who wins in that game, the public or the terrorists?

If these questions were not sufficiently overwhelming to baffle citizens, then leave it to Mark Zuckerberg to dress down President Obama over his administration’s surreptitious activities.  Take a look at the comments section associated with the New York Times article. Most of them get the irony: an Internet Titan striking the pose of upset citizen. Mr. Zuckerberg is upset that the government is messing with the trust concepts that undergird his multi-billion-dollar company. Mark Zuckerberg not only knows you better than you know yourself, he knows what you will want in the future! Since you have given up information about yourself to him anyway, he figures that you want him to define your autonomy. And then, just when he had us all snookered, the N.S.A. steps in to ruin his brainchild.

Isn’t that rich?


Goldilocks and Informed Consent

When you authenticate to a service, do you know what information about you is being communicated between the login page and the service provider? Are there distinctions among services, for example, Facebook or Google or Yahoo?  Does it make any difference whether the service is for consumers or, in higher education, under an enterprise contract?  Is there a bridging authentication service, for example InCommon?  If so, what difference does that make in terms of the release of “attributes,” or pieces of information, about your identity?  If looked at by a human, would those attributes identify you as a distinct individual?  Or are they unidentifiable parts to a human reader, but easily mined and recombined such that even pedestrian software programs can (re)create your identity?  What do service providers do with that information?

A great deal. Information about you is a valuable commodity in our information economy. Illegally obtained, for example in a data breach, your personally identifiable information is sold on the black market to identity thieves who use it to commit fraud.  Legitimate data warehouses give criminals a run for their money, however.  They don’t need to obtain it illegally or take risks dealing on a black market.  And they don’t need to commit fraud in order to make money.  The U.S. marketplace, which to date has ensured lax laws around the collection of personally identifiable information, opens the door to data collection, recombination in the form of extensive profiles on individuals, and then sale of that information and/or those profiles to hordes of hungry buyers who use it for advertising, marketing and risk analysis for mortgages and loans to individuals.  The media has paid some attention to targeted advertising, and consumers are increasingly aware that the pop-ups on their pages bear a striking resemblance to the terms of previous searches. But it is March 4, 2014.  Do you know where your data are?

Most people do not, and that is why I am beginning a deeper dive into this question with a focus on authentication.  A document prepared by the company Cirrus Identity, Inc. on attribute release surveys what attributes Internet Titans such as Google, Facebook, and Twitter release in the authentication process, and it reveals interesting variation among the companies relative to their Internet presence and core business models.  Overall observations: “There is no standard format across the providers in the case of the opaque unique ID, and email is not treated consistently across the providers either.”  By way of example, the document contrasts LinkedIn, for which email is only released if the user has set their profile to public, with WindowsLive, which gives up a number of attributes automatically.  The document states that most Internet companies pass a human-readable unique ID, mostly email addresses or, for Twitter, the @username, rather than an opaque ID.  And finally, if I am reading this document correctly, among all of the companies studied, Google releases not only the Gmail address but also a link to the user’s Google profile.  (For more information on this survey, this link is a good start.)

What this means in English is that, first of all, users have no control over the attributes that Internet Titans release in the authentication process.  Second, it would appear that each Titan makes attribute release decisions that reflect its brand, for example Twitter’s use of the @username, its company’s distinctive “handle.”  Third, attribute release bears some relationship to the business model of the company.  Google, for example, still primarily an advertising company, craves eyes on its pages to demonstrate the company’s value to advertisers.  Therefore, releasing not only Gmail addresses but also profile links encourages more hits by offering the information freely.
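To make the variation concrete, here is a minimal Python sketch of the pattern the Cirrus Identity survey describes. The provider names are real, but the attribute lists are simplified illustrations drawn from the survey's general observations, not the companies' actual, current release policies.

```python
# Illustrative only: each list is a stand-in for what one identity
# provider might hand to a service at login, per the survey's summary.
ATTRIBUTE_RELEASE = {
    "google": ["opaque_id", "gmail_address", "profile_link"],
    "twitter": ["@username"],
    "linkedin_public_profile": ["opaque_id", "email"],
    "linkedin_private_profile": ["opaque_id"],
}

def released_attributes(provider):
    """Return the attributes a given provider releases at authentication."""
    return ATTRIBUTE_RELEASE.get(provider, [])

# The survey's central point: the user has no say in any of these lists.
for provider, attrs in sorted(ATTRIBUTE_RELEASE.items()):
    print(f"{provider}: {', '.join(attrs)}")
```

Note that nothing in the sketch consults the user: the release decision lives entirely in the provider's table, which is exactly the unilateralism at issue.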

Is this document a great smoking gun? No; most people would shrug to learn that service providers offer attributes such as the user’s email address in authentication processes. It is important to remember, though, that data stores use incremental pieces of information in their larger sweep, and sophisticated software to recombine them into detailed profiles of individual consumers: purchasing habits, income, marital status, sexual preferences, number of children or dependents, travel patterns, etc. And so each piece needs to be understood as part of this larger whole.  If we cannot control information about us that comes in pieces, we have no ability to control the whole profile, or the implications that these profiles, bought and sold without our knowledge, have for our privacy and personal autonomy, for our lives, choices and potentialities as individual persons.
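A toy sketch, with entirely made-up data, of how that recombination works: each source holds only a harmless-looking fragment, but joining the fragments on a shared key (here, a hypothetical email address) yields a profile that no single source ever held.

```python
# Hypothetical fragments: each mimics what one company or data broker
# might release or sell about the same person.
fragments = [
    {"email": "jdoe@example.com", "recent_search": "camping tents"},
    {"email": "jdoe@example.com", "zip_code": "14850"},
    {"email": "jdoe@example.com", "purchase": "hiking boots"},
]

def merge_profiles(pieces):
    """Combine fragments that share an email address into single profiles."""
    profiles = {}
    for piece in pieces:
        # setdefault creates an empty profile on first sight of the key;
        # update folds each new fragment into the accumulating record.
        profiles.setdefault(piece["email"], {}).update(piece)
    return profiles

# One record now combines search, location and purchase data.
profile = merge_profiles(fragments)["jdoe@example.com"]
print(sorted(profile))
```

Real data brokers of course work at vastly larger scale and with fuzzier matching, but the principle is the same: the profile is greater than the sum of its pieces.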

But there are other reasons why I begin my deeper dive into the intersection of technology and law with authentication. First, it is critical that we open the proverbial kimono on technology processes of which most people are unaware. The point is that if the public is going to be able to discuss privacy in the information age intelligently, it must learn something about these basic processes. Second, whether a service provider releases a lot of attributes or none, human-readable or opaque, the main point is that this is a decision the company makes unilaterally. Depending on the business model, the company can decide to release more, e.g. Google, or none at all if the user chooses to restrict his or her profile, e.g. LinkedIn.  Note how technological practices that implicate privacy are woven into reputation and the service provided. Third, and most important, companies offer the user no informed consent.

Informed consent bridges two things: the flexibility that Internet companies need to be nimble in building their business models (that is, the unilateral decisions they make regarding what attributes to release in authentication, given how those decisions map to business models), and the information that consumers require to make a choice about whether or not to accept those terms as the “cost” of the service (not the only cost, by the way, but one of the ones implicating privacy). No U.S. law requires that service providers offer either the information or the consent, but in my opinion they should. Informed consent is consistent with the privacy laws and practices of every other developed country except the United States.

As our colleges and universities become more “international,” it would seem to me that, not as a matter of compliance per se but in recognition that United States institutions want to treat institutions in other countries as equal partners, we should adapt to those laws.  But compliance and partnership are only part of the superstructure of reasons why informed consent matters. The principal reason is that it is truly the right thing to do. I understand that businesses emerging in this dynamic market require flexibility; I think it would inhibit the innovative relationship between technology and the market for the law to require one single practice.  But allowing Internet Titans flexibility and leaving consumers in the dark about the implications of those practices are separable issues. Rather than swinging the privacy pendulum all the way over to restricting businesses to a single practice, it seems eminently fair simply to offer consumers the information they need in real time, and not in small-print terms of use or privacy policies, so that they can make the decision to consent, or not, to the practice.  Informed consent is to consumer privacy practices what Goldilocks is to porridge!  



What's Current in Privacy?

The last two blogs were about what should be done. This one is about some progressive initiatives. In terms of national policy, the Snowden disclosures have re-opened an important conversation about electronic surveillance laws.  We are all in charge of keeping that conversation going, at the very least to the conclusion of updating privacy laws such as the Family Educational Rights and Privacy Act; the Computer Fraud and Abuse Act of 1986; the Electronic Communications Privacy Act, also of 1986; the U.S.A.-Patriot Act of 2001; and the Foreign Intelligence Surveillance Act, originally of 1978 and updated in 2008, but evidently in need of further revision to balance civil rights and national security.    Under the heading of consumer privacy, President Obama commissioned a study a couple of years ago that compiled a consumer bill of rights.  Recently, he pushed the issue further by asking John Podesta to explore “big data” consumer issues more carefully. One need go no further than Dan Solove’s groundbreaking book The Digital Person on this subject.  Oh, and while one is reading through Dan’s oeuvre, take a look at a law review article he published recently, “The F.T.C. and the New Common Law of Privacy.” The title speaks for its thesis, but read the article for details.  As always, Dan is right on the mark of the most contemporary developments in this area.

Privacy issues in education have been focused on K-12.  Naturally, we care about our children, given their vulnerability and, as parents and teachers, our responsibility to watch over their physical safety and emotional and intellectual development. The Children’s Online Privacy Protection Act also offers the public a legal hook on which to hang concerns, of late especially, about how most common applications ignore the basic rule that use by a person under 13 requires parental permission.  This law joins the list of those in need of revision, if for no other reason than that technology has outstripped its effectiveness. Joel Reidenberg’s work out of Fordham Law School on public school use of cloud services breaks ground in the enterprise area of these concerns and is a warning bell in the night for higher education as well.   An overview of that study has links to press reports as well as the research work.

SafeGov has for a few years now been a resource for research, information and thought leadership in this broad area of education and privacy.  There is much to be found on their site, but allow me to draw attention to a recent, important addition in the vein of Joel’s research: the launch of their portal aptly named “Students Are Not Products.”  Parents will want to look into it for their children, but there is no reason for higher education to believe itself above the message, especially not with the Google Gmail litigation an on-going issue.  Here is today’s report on it, but be sure to update your search every day, because it is a moving target (and grist for a future blog).

EDUCAUSE’s Higher Education Privacy Officer Working Group has been putting out some excellent blogs, resources and other materials in this area in observance of Data Privacy Month. Take a peek. And do not fail to notice the over 50 colleges and universities that actively hosted speakers and other events recognizing this past month’s theme.  Only about 3,950 to go!  Is your institution on the list?

Ken Klingenstein, long a leader in the praxis of pioneering ideas and technologies for higher education (think: InCommon and Shibboleth, together with the late and beloved R.L. Bob Morgan), is the principal investigator for Internet2’s National Strategy for Trusted Identities in Cyberspace (“NSTIC”) grant.  Ken and R.L. Bob developed federated identity with a long-term vision to preserve privacy; Ken’s work with NSTIC continues on the path to fulfill that vision.   Among the projects under this grant is Lifestyles of the Attribute Rich and Privacy Preserved (or “LARPP”; disclosure: I am on the payroll of this project as its Community Engagement Facilitator).  Only Ken could come up with such a creative name, but only Ken could lead the charge to have computer scientists come up with “Privacy Manager” software that gives real, technical meaning to informed consent on the part of the user in the release of attributes relative to personally identifiable information in authentication.  A pilot group of 12 institutions will implement this software and socialize it around campuses before a broader release in higher education.  Watch for news about its development and wait for it to come to a school near you. Institutions that tout themselves as “international” should be especially eager to adopt it as a means of harmonizing the privacy practices of developed countries and the strange, sectoral patchwork of U.S. privacy laws and safe harbors.

Sensing that you need to know more about the significance of privacy in higher education?  Again, disclosure, but I could not imagine a better one-stop shop than to attend the Higher Education Privacy Forum that Dan and I are hosting at George Washington Law School on May 8, 2014.  When you review the schedule, do not miss the speakers. Fred Schneider, Computer Science, Cornell University, is an internationally recognized expert on network security who has always incorporated policy in general, and privacy in particular, into his research.  Harry Lewis, also a CS professor, from Harvard, has written a book, Blown to Bits, that was prescient and therefore remains relevant.   He might also tell us more about the resolution of the email controversy at the Big H.  Dan will talk about the NSA and the Snowden disclosures.  And finally, up and coming as a speaker, on a topic most definitely up and coming as an issue, the privacy and security of research data and computing: Bill Barnett.  By the way, the event is FREE!  No registration expense, just get there or be square!  More information here.



Addendum on Privacy

As I read last week about the data breach at the University of Maryland, and now about one at Indiana, what can be said about the relationship between privacy and security?  I left the conclusion out of yesterday’s post. Consider this one an addendum.

Information management is what must be said, and done.  It is the bridge between human practices and technical safeguards.  It is what connects “information” and “technology.”  It is the higher business order that joins these two necessary components, privacy and security, to the comprehensive institutional goals of effective and efficient governance, compliance and risk management.  It is the whole that is greater than the sum of its parts: on the one hand, data inventory and harmonization of practices for sensitive information, including but not limited to personally identifiable data, that instantiate notice, transparency, relevancy and administrative, logical and physical security; on the other, classifications of data tied to appropriate technological safeguards, rules regarding authentication and authorization, device maintenance and hygiene, and optimized network operations.
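The second half of that equation, classifications of data tied to appropriate technological safeguards, can be made concrete with a small sketch. The classification levels, data sets and controls below are hypothetical examples, not any institution’s actual policy; the point is only that a data set’s classification, not its departmental home, determines the minimum safeguards applied wherever it lives.

```python
# Illustrative sketch: data classification drives required safeguards.
# Levels, data sets and controls are hypothetical examples.

# Minimum technological safeguards required at each classification level.
SAFEGUARDS_BY_LEVEL = {
    "public":       {"encryption_at_rest": False, "mfa_required": False, "audit_logging": False},
    "internal":     {"encryption_at_rest": False, "mfa_required": True,  "audit_logging": True},
    "confidential": {"encryption_at_rest": True,  "mfa_required": True,  "audit_logging": True},
}

# The data inventory assigns each data set a classification level.
DATA_CLASSIFICATION = {
    "course_catalog": "public",
    "staff_directory": "internal",
    "student_transcripts": "confidential",
}

def required_safeguards(data_set):
    """Look up the safeguards a data set's classification demands."""
    level = DATA_CLASSIFICATION[data_set]
    return SAFEGUARDS_BY_LEVEL[level]
```

Because “H.R.,” “student” and “financial” data often flow through the same enterprise systems, a scheme like this keys protection to the data itself rather than to the silo it is imagined to live in.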

Comprehensive information management takes into account the different types of security breaches as a whole, not in parts: human error in the categorization of data or configuration of devices; criminal fraud; persistent nation-state threats. Functionally, in terms of roles and responsibilities, information management generates common purpose among the people and offices that touch it in various capacities: legal counsel, risk management, compliance, privacy and security subject matter officers, information technology personnel (across the board, not simply “security” and “policy” officers), academic and administrative unit heads, data stewards and custodians.

Conceptually, it makes sense of the diverse “data sets” that deceptively appear in our imaginations to exist separate and apart from each other, for example “H.R.,” “student” or “financial” data, or, raising the perspective up a notch, “research” and “administrative” data.  In fact, these types of data often flow together through centralized enterprise administrative systems.   The greatest risk colleges and universities countenance in “security” or “privacy” surely must be our conceptual distinctions that do not play out in the reality of our technical systems and functions.  Higher education’s mismanagement of information can often be traced back to this gap.  Lack of awareness of how this gap continues to plague best, and I really mean best, practices is the root problem.  Moreover, it is one rooted in this on-going, made-up tension between “privacy” and “security,” and the inversion of one for the other in terms of means and ends. 

No single practice or name will make a system perfect.  With all the effort in the world, the breaches that occurred at Maryland or Indiana or hundreds of other colleges and universities around the country might still occur.  Well-intentioned people are too flawed, even the best software can malfunction given multiple mechanized and human dependencies, and ill-intentioned people often have the emotional drive that gives them the edge in this multi-faceted competitive game.  Nevertheless, for the sake of baseline compliance, for the reputational goal of effectiveness and efficiency, and for the ideological one of supporting our missions, we must strive for greater degrees of mastery over our intellectual property, institutional information, research data, and faculty and student work product. 

Vocabulary matters.  It matters because it is a reflection of our thinking, and of how our thinking gets translated into policies, procedures, and practices.  If ten or more years ago our challenge was all about “security,” and in the last five years or so we have shed more light on “privacy,” may we now elevate our vocabulary to the management of our information. 


Privacy in Higher Education

Last week I wrote about the weak points in higher education regarding advances in promoting privacy and information management. Today I would like to suggest three objectives every institution should strive to achieve to build awareness and promote good policies and best practices.

1. Reform the Family Educational Rights and Privacy Act (FERPA)

Congress passed FERPA in 1974.  It is one of the first federal public privacy laws, and its age shows by comparison to more contemporary ones.  It has no specific technical security safeguards, for example, and it famously has cost higher education much anxiety and yet not a dime in penalties.

To be sure, reputational consequences aid in making most registrars guard transcripts as if those documents were what’s inside Fort Knox.  But when it comes to guiding faculty, for example, in the appropriate use of enterprise services with contractual FERPA protections, the lack of an individual right of action or meaningful administrative damages means that faculty for the most part are unmoved by institutional policies or guidelines.  Research in the K-12 area, and experience in higher education, suggest that wholesale violations exist.  Colleges and universities should deploy their associations and government relations people to work with Congress to revise this law and make its practices consistent with the twenty-first century.

2. Designate or Create a Chief Privacy Officer

Each campus should have a Chief Privacy Officer (CPO).  With all of the attention on how administrations should be “run like a business,” the resistance to the appointment of a CPO suggests either ignorance, which education can correct, or the influence of vested interests (registrars, institutional counsel, IT, risk management, etc.) that perceive the role as a threat to their proverbial “turf.”  The latter resistance requires leadership at the highest levels to appreciate the significance of a CPO and to make the call to appoint one.

What does this role do, exactly?  First, a CPO grasps all of the international, federal and state law related to privacy – no small task!  They grasp those laws and regulations sufficiently to facilitate bottom-line compliance practices.  Examples include managing the personally identifiable information of all constituencies on campus; assisting privacy and security officers of specific subject matter areas with compliance (registrars, GLBA, HITECH, HIPAA officers); and making privacy practices consistent across different departments (especially to avoid breaches associated with shadow systems) and data sets (for example, administrative and research data). In short, they facilitate the alignment of “information” and “technology.”  In a large, distributed research university, this role should be well integrated with registrars, associate deans and vice presidents, university counsel, audit and IT (information systems, not just security and policy).

In addition to these responsibilities, a CPO should also assist the institution with higher-level privacy issues, such as the sale of email addresses and the like to third parties (not an inconsiderable issue in athletics or bookstores, for example); institutional response to national policy issues related to electronic surveillance (also not an inconsiderable issue!); the development of institutional policies that involve privacy, such as network monitoring and disclosure of electronic media; and the alignment of technology and law in the practice of comprehensive privacy laws internationally.   Any institution that claims to be “global” or “international” should be a leader in this area especially.

3. Create an Institutional Privacy Plan Uniting Academic and Administrative Players

In previous blogs I have made the case that privacy is a complicated but necessary-to-address issue for higher education.  That being said, senior management should set the expectation, on both the administrative and academic sides of the house, that this issue be raised to a level of institutional awareness, drawing on the subject matter expertise available on campus or in collaboration with national associations and other campuses. 

Administrators and faculty with relevant expertise should work together to discuss the intersection of their respective work, research and best practices for the institution.  Where an institution recognizes a gap, it should collaborate with other institutions or engage with the national associations developing expertise in this area.  Take a page out of the book of accessibility advocates and have presidents issue something akin to a “President’s Accessibility Plan,” with the term “privacy” inserted, to build momentum throughout campus.  More than any other issue I can think of, this one requires input from all areas of campus to understand its complex dimensions, help educate the academic community, and act in higher education’s best interests.  It is also an invitation to international partners.  The United States stands out “like a sore thumb” in the international privacy community for its sectoral privacy laws.  Here is a very important and practical example of how we can learn a great deal from international partners.

These three recommendations should be enough to get a campus started.   Later this week I will report on some concrete initiatives.  


Straight Talk on Privacy and Higher Education

Privacy has become “all the go” in conversations about governance, compliance and risk in higher education. That development is a positive one. Experience in this area makes me a little wary, however, about just how deeply the concept is sinking into change management processes.  So far the attention brought to this issue, as expressed in publicity campaigns such as “Data Privacy Month,” seems a little too easy, somewhat glossy to me.  Here, then, is some straight talk to get a deeper conversation going within our colleges and universities.

Because I grew up knowing that I was bisexual before it was fashionable – or even had a name in my culture of origin – I prized privacy for self-protection. The subject therefore interested me in law school.  Because I was a relatively early player with a law degree in information technology, I recognized from the start that privacy played a key role in every aspect of the development of the Internet and the implementation of technologies in our colleges and universities.  When President Bush signed the USA-Patriot Act in October of 2001, only six months after I began work as the Director of IT Policy at Cornell, Brian Hawkins and Polley McClure tapped me to address the impact the legislation had on the intersection of law and technology in higher education.  Privacy jumped out at me as the target issue at the core of balancing national security and civil liberties.  While true for U.S. society in general, the issue was especially poignant for higher education because privacy undergirds the true efficacy of our missions.   It is not possible to teach, do research or conduct meaningful outreach without intellectual autonomy.  Intellectual autonomy requires privacy.

In my work as an administrator, I encountered many obstacles in making the case for “privacy.”  The first obstacle is confusion about the concept of privacy and its many legal and cultural meanings.  In previous blogs, I have attempted to break the legal concept down into five areas.  This categorization enhances understanding of the kind of privacy intended in the context of liability, compliance and information management.  That understanding requires knowing at least the basics about fair information practices (notice, relevancy, transparency and security).  Next, we need to know something about public privacy laws such as the Family Educational Rights and Privacy Act, the Financial Services Modernization Act and the Health Insurance Portability and Accountability Act, to name the most obvious ones.  These laws are important because of the immediacy of the first to our bread and butter, and of the last two because they have separate “security” and “privacy” provisions.  With that information under our belt, we can move on to other areas of privacy that have meaning in institutional administration, for example regulatory issues such as F.T.C. rules (privacy policies for transactional web sites, not a minor issue given the role of athletics on our campuses) or civil torts such as defamation (which can be brought against an institution via the concept of respondeat superior).   There is another maddening sub-issue worth mentioning.  Some institutional counsel at private institutions think that any use or discussion of the term “privacy” will somehow compromise the rights and privileges belonging to a private corporation.  In a word, that is baloney.  If someone makes that claim, think to yourself, “what perceived territory are they protecting?” and move on.  There is too much work to do to get caught up in such foolish semantics.

The second obstacle to serious conversations about privacy is how “security” concerns have overwhelmed consideration of “privacy.”  This dynamic was especially evident over the last decade or so, when “security incidents,” i.e. failures of technical safeguards to protect both devices and information, proliferated exponentially with script kiddies, criminals seeking personally identifiable information for the purpose of committing fraud, and nation-state threats pounding on our networks.  Data breach notification laws caused university officials such as counsel, audit, risk management and public relations to ask CIOs to address the problem. CIOs looked to technical security specialists, i.e. directors of security and security engineers, to manage the administrative, logical and physical security lapses behind these breaches.  In other words, “security” overwhelmed an understanding of an issue that at its root is one of privacy.  A curious inversion occurred as a result: one of the four principles of fair information practices, “security,” became in the minds of most institutional administrators the end of what needed to be done rather than the means to a higher goal, “privacy.”  This inversion still plagues our understanding of comprehensive information management in colleges and universities today.   In fact, I will go so far as to say that until we unravel that inversion, we will never achieve effective governance, compliance and risk management in this area.

The third obstacle is in many ways the most interesting because it is about people, not technology.  Vested interests get in the way of change.  This axiom applies to any administrators who are unwilling to recognize, or work toward raising, the level of how we do business involving “information” and “technology.”  It so happens that is precisely how higher education, or any other corporation, does business in the twenty-first century, but you wouldn’t know it talking to the many registrars, counsel, CIOs and other IT managers and professionals, data stewards, and the variety of vice presidents or deans who refuse, out of ignorance or fear, to listen, and who perceive change as a capitulation of their authority.  Often this intransigence filters down to their lieutenants, such as data steward delegates or college business officers, each group loaded for bear before anyone steps into a meeting to discuss policies or processes designed to align information and technology practices to meet compliance and business needs. 

The combination of these three major obstacles confounds appropriate governance, complicates compliance and elevates risk.  There are fixes.  We need more education about the meaning of information privacy in our institutions; the term is already accepted in the corporate arena, where higher education lags tremendously.  Senior administrators should find champions who will make information management the higher order of how to do business with technology and, in turn, flip privacy back to being the end of why and how we make changes, rendering technical security the means.  That flip will result in more concrete and appropriate fixes because it mirrors the law as well as the practices of the global corporate community.  Finally, those same senior-level administrators must set the proper tone within their administrations.  The message should be that change in this area is necessary.  For the sake of the institution, personal intransigence must give way.  In short, the players need to work together in a manner that is honest, collaborative and transparent … hmmm, a reiteration of some of the very values that privacy stands for in our society.

Not until higher education recognizes the obstacles that stand in the way of necessary change will we begin the work of running our institutions in an efficient and effective manner becoming of the twenty-first century.  In an era of intense competition over which institutions will be the winners and losers in ten, twenty or fifty years, I put my chip on this idea: those that embrace this change will not lose, no matter what their Carnegie Classification.  For with this change comes the awareness of what privacy means for societies as well as for individuals. Higher education can speak to those larger goals, unlike the corporate sector, which manages privacy for compliance purposes alone.  That is why I shared with you my personal reasons why privacy became important to me.  If the years between the USA-Patriot Act and the Snowden disclosures have taught us anything, it should be that privacy matters to everyone, that it touches all dimensions of our lives, and, not least, that it has a very special place and resonance in higher education.


National Privacy Month

January 28-February 28 is National Privacy Month.  I don’t know who coined the phrase or anointed the dates (who ever does, anyway?), but let’s take full advantage of it to explore some developments.

We start out with today’s NYT piece about what’s behind the LED lights at Newark International Airport.

I was just there yesterday, so, for the sake of discussion, let’s assume they were deployed everywhere (not just in Terminal B). First, they saw my bedraggled self coming off an international flight, amazed at the three uniformed police, one armed with a dog, meeting passengers at the door of the jetway.  Methinks they targeted someone for smuggling, since the usual operating procedure is to walk the doggie around baggage claim sniffing for contraband food and drugs.  There they might catch someone sheepishly moving something around in his or her belongings before going through customs.  Or the occasional burglar who grabs someone’s bag “accidentally on purpose.”   And, I hope, the intelligent terrorist who has so far gone through multiple layers of security to reach the inner workings of Passport Control and U.S. Immigration.  Are you offended by this kind of surveillance?  I am not, as a rule, so long as there are laws and policies to keep the surveillance cabined to crime.  And that is the problem.  Those laws don’t exist.   There is no legal imperative for private or municipal entities to have policies either.  So now let’s think about the consequences of that lapse.

Let’s pretend I am having an affair with someone abroad, and that I am lying to my spouse about my comings and goings. Hypothetical Hubby is suspicious for any number of reasons, but especially of late because he keeps running into unexplainable things.  A Turkish lira and a fancy scarf he does not recognize.  Then there is the half-completed, crumpled Passport Control form he found stuffed in the bathroom trash bin.  And why do I keep taking business trips allegedly to Dallas or Louisville with little to say about those cities upon my return?

Hubby is no fool, and besides, he has connections. He went to high school with a guy who works for security at Liberty International, the manager of the international travel terminals. Over drinks with a few buddies, he says, jokingly, how about using this facial recognition software I have and a photograph of my wife to see if she is in your database?  A few days later the buddy calls back.  How’d you know?  She has been through Passport Control five times in the last six months!

Did the husband or the manager break any laws? I can’t think of any.  Does the Port Authority have a policy on “administrative voyeurism”? I don’t know, but I do know that there is no law that requires such a policy.

These two questions are the crux of the policy problem that plagues personal privacy.  It is not corruption alone that surprises.  We know people are imperfect.  That is why we have laws and policies to govern that kind of behavior.   What surprises is that our society lacks laws governing this particular area.   What corruption can do in the gap that exists between the advances of technologies that impinge on personal privacy and the absence of cabining law or policy is limited only by our imagination.

With history casting a long shadow on this kind of behavior, it is a chilling guide, hardly limited to marital infidelity. Recent disclosures about the closing of lanes that snarled traffic and imperiled emergency vehicles offer a glimpse of the petty corruption rife amongst us; just think what could happen if it were elevated to the levels of inhuman disgrace witnessed throughout the harrowing events of the twentieth century.  No wonder Fred Cate, among the very best thinkers in our country today on questions of privacy and social policy, and who is quoted in the NYT article, considers unregulated use of these new surveillance technologies “terrifying.”


