If there is anything that I have learned in the course of using an iPad, it is how much I love my computer.
Two years ago I wrote a column for Inside Higher Ed entitled “The iPad for Academics.” Now, two years and two new models of iPad later, it seems time to revisit some of that original column: How well does it stand up, how did my predictions turn out, and what have I learned since then? The answers are, roughly, "good," "O.K.," and "a lot."
When I wrote my column, no one was sure what the future held for the iPad, and there was serious skepticism about the more apocalyptic predictions. In fact, somewhat boringly, Apple's release of the iPad did what most Apple products do -- change the world, sell millions of units, and alter our information ecosystem irrevocably -- but it didn’t end the world.
In the two years that I’ve had the device, it has indeed become indispensable to me. It’s become my alarm clock, my radio, my television, my crossword puzzle, and above all (as I said in my original column) my reading device. I use it to read and read and read. It creates opportunities for reading I didn’t have before. In fact, I use it to read my own work -- the dreaded rush to print up conference papers finished moments before my panel has been replaced with a casual saunter to the podium, glowing digital copy of my paper in hand.
My iPad has excelled in forums where paper used to hold sway, and having it (or my iPod Touch) with me at all times means that I’ve discovered new times and places to do work. It’s great.
But it’s not a laptop, and it never will be.
As I and many other people have noted, the thing is for consumption, not production. I’ve tried using it to write and take notes -- with a Bluetooth keyboard, with one of those cases that has a keyboard built in, with ridiculous little styluses, and so on. There’s no way around the fact that the human body did not evolve to interact with a pane of glass. I can type faster than the keyboard input can be buffered, producing strings of illegible characters. At other times Apple’s pathetic spell checker stops me in my tracks.
And then there’s the interface. I suppose for some people the iPad’s interface works just fine. But once you’ve tasted the power of a multi-windowed environment with fully customizable keybindings, the iPad feels like a small, padded room. You can’t have two windows open at once on an iPad. Who can do serious academic work one window at a time? Not me, not any more -- and I’m not going back.
Perhaps I am one of the old generation who will someday be put to shame by nimble-fingered young’uns tapping expertly away on their nanometer-thick iPad 7s, but I don’t think so. People may get used to the limitations of the device, but that doesn’t mean that it’s better than what came before.
In fact, I see this as one of the dangers of the iPad. I see them everywhere on campus, and I wonder to myself: Are my students really getting through college without a laptop? Frankly, the idea seems horrifying to me. I don’t doubt that they can do it -- I worry what skills they are not learning because of the smallness (in every sense of that word) of the devices they learn on.
I don’t have a problem with students bringing their iPads to class, and there are times in some of my smaller seminars when we are all reading the text from our iPads. But even this consumption is starting to worry me. Does anyone really believe that digital textbooks are going to improve life for anyone except the textbook manufacturers? Like some mid-nineties wet dream of the content industry, digital textbooks have all of the DRM and none of the shareability of paper textbooks. And despite the potential of multimedia presentation, it’s not clear to me that they will prove to be anything more than a regular textbook with a few YouTube clips thrown in.
And the low prices? Remember when ATMs were first introduced and there was no fee for using them? Yeah, I don’t either -- it was so long ago that the idea of improved service for free seems like a distant memory. The future the digital textbook market has planned for students is not, in my mind, a very bright one.
Two years down the road I’m glad that iPads exist, and I’m happy that most of the hype about them has been more or less borne out. The iPad has a valuable place in our information ecosystem. The danger comes when it becomes a replacement for other technologies that preceded our ubiquitous flat friend -- and still do their job better than it can.
Alex Golub is an assistant professor of anthropology at the University of Hawaii at Manoa.
The United Nations Educational, Scientific and Cultural Organization has issued guidelines to help countries promote open access to research findings. While the guidelines are not binding on member nations, they suggest that countries take a consistent and broad approach to assuring free access to research findings. The report containing the guidelines also rejects the idea that these issues have been resolved simply because partial access is available, or even full access to some work in some countries. "There is a problem of accessibility to scientific information everywhere," the report says. "Levels of open access vary by discipline, and some disciplines lag behind considerably, making the effort to achieve open access even more urgent. Access problems are accentuated in developing, emerging and transition countries. There are some schemes to alleviate access problems in the poorest countries but although these provide access, they do not provide open access: they are not permanent, they provide access only to a proportion of the literature, and they do not make the literature open to all but only to specific institutions."
If textbook affordability is the Holy Grail, then those of us who work in higher education are careening Monty Python-like as we search for it, stirring up unnecessary obstacles for ourselves all along the way.
Consider the dual paths we are taking. First, there’s the all-encompassing push to “go digital,” as if somehow the output format of a book, whether it is electronic or print, is the sole determinant of cost.
That is the wrong way of thinking. Input -- the price of content -- is much more important to the total cost of course materials than output -- the format in which those materials are ultimately consumed by the student.
Then, there’s the push to “go open.” In recent years, as concern over textbook affordability has grown, this idea has received much attention, with “open educational resources” -- or “OER” materials, as they are often called -- leading the charge.
This, too, seems attractive, but we are a long way from having OER content dominate the learning landscape, even if much of it is free. The creation of content by academic publishers is part of our literary and reporting traditions, and any system for delivering content to students should take both “free and open” and commercially produced materials into account.
In fact, the best chance to make an immediate and meaningful impact on the price of textbooks is to facilitate the merging of traditional and free content, allowing instructors to include exactly what is necessary, and freeing students from the rigid and expensive traditional offerings from academic publishers. In this model, “book” costs are lowered regardless of output format.
If we are cognizant of ways of merging different types of content in order to get the biggest academic bang for the buck, we must also be mindful of methods to access this content: to break it apart, to “disaggregate” it from the traditional bounds of textbooks, and to present it to students in an effective manner.
Indeed, the main benefit of new technologies in education should be to provide more choice to instructors, and ultimately to students. If a professor can mix open content with chapters from relevant textbooks, timely journal articles, and up-to-the-minute news reporting, then he or she can truly provide a unique “book” to students, untethered from the rigidity of the traditional offerings from academic publishers.
Textbook affordability has been a hot topic for at least a decade, but it has grown even hotter since the 2008 market meltdown, which greatly affected Americans’ spending power at the same time that the cost of college -- already rising -- began to skyrocket. Various Band-Aid solutions have emerged in response to textbook costs, with rentals the “in” solution for a while, along with the longstanding “gray market” of purchasing textbooks through international versions of websites, where the cost of some books in Europe can be materially lower than in the U.S.
More and more students, at least anecdotally, are taking the route of “book sharing,” mixing and matching content among themselves rather than paying the significant freight asked of them by the colleges and universities they attend. That behavior is, in itself, a form of disaggregation, for it is breaking the traditional one-to-one relationship between student and assigned book.
But the disaggregated model I foresee is the one that we have been building for the past year at AcademicPub. It allows the professor to comb for the very best content in his or her discipline, to mix and match that content into a consistently presented and compelling narrative or set of chapters, and to deliver the completed product to students in the format that the student prefers -- print or digital, whichever method leads to the best learning result for that student.
By all means let’s aspire to make the materials we assign our students more affordable, but we mustn’t fall victim to any “magic bullet” scenarios. Actions which fail to account for the cost of content will fall short. Failure to account for the value and ubiquity of existing texts from leading scholars through traditional publishers won’t cut it either. Going digital alone won’t lower the cost of textbooks, but disaggregating content just might work.
Caroline Vanderlip is CEO of SharedBook, Inc., which launched AcademicPub (TM), in April 2011.
Professors at the University of Ottawa, in Canada, want the right to bar laptops from their classrooms, CTV Ottawa News reported. Marcel Turcotte, one of the professors pushing the idea, said of his students: "They are distracted and we are competing with that for their attention.... You see one student who is really not listening, would be watching the video and then it's kind of contagious." A faculty vote is planned for May.
The National Science Foundation, the National Institutes of Health and other federal agencies plan to announce today a major new research program focused on big data computing, The New York Times reported. The agencies will pledge $200 million for the effort.
A day after Blackboard announced its acquisition of two prominent Moodle partners and the creation of an open-source services arm, various Web discussion boards were abuzz with chatter about the implications. At Moodle.org's official "Lounge" forum, some open-source advocates lamented what they read as a corporate intrusion on the open-source community -- prompting Martin Dougiamas, the founder and lead developer of Moodle, to defend his decision to lend moral support to Blackboard’s takeovers of Moodlerooms and NetSpot.
“Moodle itself has not, and will not, be purchased by anyone,” Dougiamas wrote to a discussion thread. “I am committed to keeping it independent with exactly the same model it has now.” While the new Blackboard subsidiaries and their clients have produced many helpful modifications to Moodle’s code, “it's always up to me to include [modifications] in core (after it gets heavily reviewed by our team),” Dougiamas said, “otherwise it goes into Moodle Plugins.” He added that Moodle still has dozens of other partner companies that are not owned by Blackboard.
Charles Severance, another big name in the open-source movement who not only endorsed the deal but has been hired to work with Blackboard’s new open-source services division, expanded on the implications of the move in a post on his own blog. “The notion that we will somehow find the ‘one true LMS’ that will solve all problems is simply crazy talk and has been for quite some time,” Severance wrote. “I am happy to be now working with a group of people at Blackboard that embrace the idea of multiple LMS systems aimed at different market segments.” The watchword of this era of multiple learning platforms per campus, he said, is interoperability, and that will be a priority for him in his new capacity with Blackboard.
Severance assured the open-source community that contributions he makes to Sakai on Blackboard company time will remain open, and that he “[doesn’t] expect to become a developer of closed-source applications.”
In 2010, the National Science Foundation and National Endowment for the Arts convened a historic workshop -- it was their first jointly funded project. This meeting marked the beginning of a new level of national conversation about how computer science and other STEM disciplines can work productively with arts and design in research, creation, education, and economic development. A number of projects and follow-up workshops resulted in 2011. I was lucky enough to attend three of these events and, in the midst of all the exciting follow-up conversations, I couldn't help but wonder: What about the digital humanities?
After all, the digital humanities have made it now. A recent visualization from University College London shows more than 100 digital humanities centers spread across the globe. There are dedicated digital humanities funding groups within the National Endowment for the Humanities and Microsoft Research. The University of Minnesota Press published a book of Debates in the Digital Humanities in January.
So why doesn't the digital humanities have more of a seat at the table? Why is there the stereotype that, while computer scientists and digital artists have much to discuss, digital humanists only want to talk about data mining with the former and data visualization with the latter? I believe it is because the perception has developed, helped along by many in the field itself, that digital humanities is primarily about data.
Certainly a grasp of data -- the historical record, our cultural heritage -- is a great strength of the humanities. But in the digital world, the storage, mining, and visualization of large amounts of data is just one small corner of the vast space of possibility and consequence opened by new computational processes -- the machines made of software that operate within our phones, laptops, and cloud servers.
A key experience in my journey to understanding this began with a debate about James Meehan's Tale-Spin, the first major story generation system. I had always been basically uninterested in Tale-Spin, though I knew it was considered a landmark on the computer science end of electronic literature. I simply didn't get excited by the stories I had seen reprinted in the many scholarly discussions of the system.
During the debate it became clear that I would have to look a little deeper. When I looked at Tale-Spin's computational processes, what I found was surprising and complex, as evocative and strange as any of Calvino's invisible cities. Tale-Spin operates according to rules constructed as a simulation of human behavior, built according to cognitive science ideas that were current at Yale in the mid-1970s, when it was designed. For example, in this model, when characters interact, they take elaborate psychological actions, projecting multiple possible worlds to see if any course of action might create a world they desire.
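To make that style of reasoning concrete, here is a minimal, hypothetical sketch in Python of planner-style character behavior in the spirit of Tale-Spin: a character projects the possible worlds that candidate actions would produce and checks whether any chain of actions reaches a world it desires. The names (`World`, `plan`, the toy actions about Joe and the berries) are my own illustration, not Meehan's actual implementation.

```python
# A toy, hypothetical sketch of planner-style character reasoning,
# loosely in the spirit of Tale-Spin (not Meehan's actual code).

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class World:
    joe_hungry: bool = True
    joe_knows_berries: bool = False
    joe_has_food: bool = False

# Each "action" projects a possible world: what would be true if the
# character did this?
def ask_bird_about_food(w: World) -> World:
    return replace(w, joe_knows_berries=True)

def go_to_berries(w: World) -> World:
    if w.joe_knows_berries:
        return replace(w, joe_has_food=True, joe_hungry=False)
    return w  # Joe wanders without learning anything

def plan(world, goal, actions, depth=3):
    """Search over projected worlds for a chain of actions that satisfies the goal."""
    if goal(world):
        return []
    if depth == 0:
        return None
    for act in actions:
        projected = act(world)
        if projected == world:
            continue  # the action changed nothing; skip it
        rest = plan(projected, goal, actions, depth - 1)
        if rest is not None:
            return [act.__name__] + rest
    return None

if __name__ == "__main__":
    story = plan(World(), goal=lambda w: not w.joe_hungry,
                 actions=[go_to_berries, ask_bird_about_food])
    print(story)  # ['ask_bird_about_food', 'go_to_berries']
```

Even in a toy like this, what counts as a goal, which actions a character may consider, and what the projections leave out are design decisions encoded in the processes themselves, and they are invisible to anyone who reads only the output.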
In short, I learned that it is Tale-Spin's processes that have the literary value, creating a fictional world that gets its fascinating strangeness from taking a recognizable aspect of human behavior, exaggerating it, and stripping away almost everything else -- answering the question, "What would fiction look like if we accept the model of humanity being proposed by this kind of cognitive science?" More broadly, reading the processes of Tale-Spin also helped me think about the limits of simulations of human behavior, even those informed by the most recent scientific ideas, as well as how ideas and biases can be encoded in software in ways that are invisible to those who only see the output.
Finally, it helped me learn an important lesson about making media: fascinating, successful, hidden processes do little to make the audience experience stronger. As a result of these realizations I had to apologize to colleagues for dismissing Tale-Spin -- and my fascination with the project grew until it became a central object of study for my book Expressive Processing.
Over the years since, it has become clear to me that there are many other processes that cry out for attention. All the tools of our software society, from the document-crafting Microsoft Word to the architecture-designing AutoCAD, are enabled and defined by processes. Software processes operate Walmart's procurement system and Homeland Security's terrorist watch list. The interactivity of mobile apps and websites and video games is created through the design of processes. In other words, it is human-designed and human-interpretable computational processes that enable software to shape our daily work, our homes, our economy, our interpersonal communication, and our new forms of art and media. Processes even enable the data mining that drives much digital humanities work (and Amazon's recommendation system).
For these reasons and more, when computer scientists and digital artists get together, most of what they talk about is novel processes. Why invite digital humanists, if they're going to keep dragging the conversation back to data?
Of course, this stereotype is a distortion of the history and present of humanist engagement with the digital world, but it passes for truth far too often. Something needs to be done to fight it. I believe all of us with a stake in the future of the digital humanities -- and perhaps more of us have a stake than realize it at the moment -- should push for a vision of the field that acknowledges that it has never simply been about data. Here are two areas where I think pressure is particularly important.
First, the humanities is not simply defined by the data it has mastered. Whether in literature, philosophy, media studies, or some other discipline, humanists understand the data they study through particular methods. Two decades ago Phil Agre powerfully demonstrated that humanities methods could shed important new light on software processes. In his Computation and Human Experience, he performs close readings of computational systems and situates them within histories of thought. His analysis serves a primary humanities mission of helping us understand the world in which we live, while also helping reveal sources of recurring patterns of difficulty for computer scientists working in AI.
It is an early example of what is now increasingly being called "software studies" -- a tradition in which my work on Tale-Spin participates. In software studies, humanities methods and values engage with the specific workings of computational processes. This sort of approach has the potential to become an exciting point of connection between the humanities and computer science, both pedagogically (as a route to the "computational thinking" that is increasingly being put forward as a key component of 21st-century general education) and as a critical and ethical complement to the models of interpreting processes found in most computer science.
The good news is that work of this sort is already becoming more established, with the MIT Press having recently founded both a book series for software studies and one for its sibling "platform studies" (which focuses on the material conditions that shape and inspire the authoring of computational processes). The promise of software studies is that the digital humanities can be central to one of the most pressing issues of our time: helping us both to understand and to live as informed, ethical people within a world increasingly defined and driven by software.
And we can also go further, helping to create this world. More than a quarter-century ago, Brenda Laurel's dissertation established how deep knowledge of subject matter developed within the humanities -- in Laurel's case, classical drama -- could be used to inform the design of new technologies. Laurel became a leading creator and theorist of digital media by adapting insights and models from a long history of humanities scholarship on the arts. Such work is, if anything, even more vital today -- and it is the second area of digital humanities work that I believe we should press forward. With the rise of computer games as a cultural and educational form (along with other emerging media technologies), computer scientists are increasingly being called, both in universities and industry, to develop computational processes that make new forms of media possible.
But computer science has no knowledge or methods appropriate for guiding or evaluating the primary, media-focused aspects of this work. Computer science's next level of dialogue with the digital arts community is certainly encouraging, but there is also an essential role for the humanities to play in both contributing to innovative media technology projects and helping set the agenda. Unfortunately, unlike software studies, this area of digital humanities work does not yet have a name and is often not even identified as humanities, despite its deep grounding in humanities knowledge and methods (the scholars involved generally also have identities as digital artists/designers or computer scientists).
But the importance of addressing this lack is becoming clear. In fact, I am happy to announce that an unprecedented group of partners (including the NSF, NEH, NEA, and Microsoft) have stepped forward to help convene a workshop on this topic that Michael Mateas, Chaim Gingold, and I will host at UC Santa Cruz later this year. Our planned outcomes range from developing a greater understanding of this area of digital humanities to matchmaking a set of projects that are explicitly at the intersection of computer science, digital arts, and digital humanities.
Now for the bad news. Unfortunately, as digital humanities is coming to public consciousness, the vision of the field being put forth in the most high-profile venues leaves out entirely such possibilities as these. In January, Stanley Fish wrote in The New York Times that digital humanities is concerned with "matters of statistical frequency and pattern," and summarized digital humanities methodology as "first you run the numbers, and then you see if they prompt an interpretive hypothesis." Earlier in January, at the Modern Language Association mega-conference, a workshop on Getting Started in Digital Humanities suggested that the field's promise lies in the fact that "Scholars can now computationally analyze entire corpora of texts or preserve and share materials through digital archives."
How will digital humanities ever come to be something more diverse and relevant if both detractors and supporters seem to agree that its sole focus is storing and analyzing data? I believe digital humanists must begin by recognizing and developing important areas of work, already part of the field's history, that such conceptions marginalize. And those in the field must see these areas as important places for digital humanities to grow, even if they lie beyond the wall that digital humanists are inadvertently helping to build around themselves.
Noah Wardrip-Fruin is associate professor of computer science and co-director of the Expressive Intelligence Studio at the University of California, Santa Cruz. His most recent book, Expressive Processing, has just been published in paperback.