I just wrote a memo to a group of budget people explaining (again) why it takes library staff with good technical skills, time, and lots of patience to make sure that when you click on a button in a library database to find an article, you actually find the article. Since it’s all online now, it’s much less work, right?
Well . . . no. In the past it was fairly simple. We subscribed to journals, cataloged them, changed the catalog record when the title changed, changed it back when the editors decided they liked the old title after all, checked in issues, and sent them off to be bound once a year. Not so difficult, but it obviously took real people to move those real things around.
Now it’s almost all online, but since each library subscribes to different things, we have to tell each database which journals are available to our community. Since one database may have a citation to an article that is full text in a different database, we also have to make them talk to each other. And we have to route off-campus users through a proxy server that verifies that an incoming request comes from a community member covered under our license. Many libraries add yet another layer on top of all that so that the user can do one search that goes out to multiple databases, including the library’s catalog. (Since those cost tens of thousands of dollars and even more staff time, we haven’t done that yet.)
But wait, there’s more! To make sure any one user can find their way to the content in the databases, we also build a website and create online guides so that anyone who comes knocking on our Internet door can find out what resources are available. And, of course, there’s the bigger picture: we have to figure out which of these journals and databases are the best match for our curriculum and within our budget, which means periodically pulling usage data, studying changes in programs (oh wow, that African Studies program proposal just passed!), deciding what to do when publishers jack up prices by thousands of dollars without warning, as they do, and regularly negotiating with our faculty to make sure we’re doing our best with the money we have. It looks so simple when you pull up a full-text article, but there’s a lot going on behind the screens.
This is one small example of the concept John Palfrey and Urs Gasser explore in their new book, Interop: The Promise and Perils of Highly Interconnected Systems.
We need systems to work together because it makes life much easier. For example, at some point in the 19th century, people needed to agree on a standard distance between train tracks so trains could travel across borders. Russia decided to use wider tracks (Vladimir Nabokov in his memoir called them “ample and lazy”) so that travelers from St. Petersburg to Paris had to disembark and change trains at the border. Though inconvenient, it meant potential invaders would be inconvenienced, too.
Palfrey and Gasser’s book looks at the benefits and risks of interconnectivity in order to lay out a way both to understand what’s involved and to achieve a healthy level of interconnectivity without inadvertently introducing too many risks. One of the reasons interconnectivity is difficult is that it’s not only a technical issue. It has at least four layers: the technology involved, the content or data being shared, the humans using the data, and an institutional layer of people deciding through laws, standards, cultural values, or business decisions how they want to interact and who’s going to pay for it.
If you can’t get the article you want from your library, it could be any of a number of technical problems (your wifi connection was interrupted, or the link between the database where you started your search and the database that contains a copy of the article isn’t working properly). The data may be problematic (the database vendor had a dispute with the publisher of a journal, so the journal contents were pulled out but the link resolver doesn’t know that yet; or you’re off campus and the barcode on an ID card you replaced last month no longer matches the one in the library’s patron file). It could be human – you were sure you’d found that article in JSTOR last time you looked though actually it was sent to you by a friend, or you misremembered the article’s title. Or it could well be institutional. Perhaps the journal you want tripled in price, the library canceled the subscription, and you missed the memo, or the state subsidy that funded a database got axed last month, or Congress passed a law to turn off the Internet. Okay, that last one is unlikely, but you get the idea. There are all kinds of layers involved in interop.
Palfrey and Gasser provide interesting examples of why interop is beneficial – including the ways that people who didn’t know one another were able to use open source software to connect with each other and with people on the ground when Haiti was struck by a massive earthquake in 2010. In this case, it involved a non-profit technology company that had built a suite of interactive software designed to work in crisis situations, a program at Tufts University, the United Nations, and an intriguing organization I’d never heard of, the International Network of Crisis Mappers. Online and on the ground, information was gathered, shared, and used by people who had never met but had to act, right now. This is interop in the public interest, and it was fast, effective, and democratic. On the other hand, there are issues around privacy and security in which interop can spread problems quickly.
Palfrey and Gasser have written a clear and well-organized book about a topic many librarians will instinctively recognize (and, indeed, a chapter is devoted to sharing and preserving knowledge). But once you read it, you will see interop issues everywhere. In a world where we tend to specialize, they have done a good job of drawing from a wide range of disciplines, including communication studies, law, sociology, economics, and information science, to map out how culture and technology and our efforts to weave our systems into a coherent whole are complicated on many levels. The book includes a case study on how integrating our healthcare information systems could bring benefits – yet is so very hard to do for many reasons. There are more case studies available online through the Berkman Center for Internet and Society.
If you work in a library, either as an employee or as a researcher, you don’t have far to look for examples of interop at work – though chances are you won’t actually notice it until it doesn’t work. But it’s an issue that matters in all kinds of situations.
As the authors point out, so many of the choices we make today are part of a tangle of intersecting concerns. We can’t pull on one string to untangle them all, and we can’t patch them together with a little extra code. We need to understand the whole tangled mess. The authors argue that understanding interop processes might help us make better decisions about our interconnected world. “It should push us, as individuals and as societies, to acknowledge and address the costs and benefits of deep interconnection among technologies, data, humans, and institutions,” they write. I suppose this is why, as I write yet another memo about the need for staff to make expensive proprietary systems full of proprietary content work together, I am also thinking about how to talk to our faculty about the open access movement and strategizing ways to make more of the information we handle available to all, a process full of institutional, personal, and cultural barriers. But we need to understand the problem in all its complexity. As the authors point out, it’s only if we understand complex systems and what is at stake if they fail that we will be able “to fashion the kind of world in which we want to live.”