The poster session is an important but usually humble component of an academic conference -- though you’d never know that from a promotional video for one held at the University of Oxford this month. The clip looks like the trailer for a sci-fi Hollywood blockbuster. The name of the conference, Force 2015, sounds like one, too.
Besides its snappy acronym, the Future of Research Communication and e-Scholarship group (“a community of scholars, librarians, archivists, publishers and research funders”) has a manifesto offering a comprehensive vision of post-Gutenbergian intellectual life. Issued in 2011, it forecasts “a future in which scientific information and scholarly communication more generally become part of a global, universal and explicit network of knowledge; where every claim, hypothesis, argument -- every significant element of the discourse -- can be explicitly represented, along with supporting data, software, workflows, multimedia, external commentary and information about provenance. In this world of networked knowledge objects, it would be clear how the entities and discourse components are related to each other, including relationships to previous scholarship; learning about a new topic means absorbing networks of information, not individually reading thousands of documents....”
The new Web site 101 Innovations in Scholarly Communication may not have been intended as an interim report on how that future is shaping up, but it has the features of one even so. It’s the online complement to the Force 2015 poster of the same name, prepared by Jeroen Bosman and Bianca Kramer, both from Utrecht University Library in the Netherlands. (Bosman is the subject librarian in the geosciences; Kramer, in the life sciences and medicine.)
The most striking element of both the poster and the site is a multicolored circular chart that looks something like a zodiac or gaming wheel. (See bottom of this article for a larger version than appears on top.) It flashes by in the opening seconds of the aforementioned video, too fast for the viewer to notice that it is divided into six sectors: discovery, analysis, writing, publication, outreach and assessment. There are little logos in each, representing digital tools and products. You find the Google Docs icon in “analysis,” for example, and Zotero in “writing,” while academia.edu appears in “outreach.”
It’s the Great Cycle of Research Life, so to speak -- beginning with, and ever returning to, the zone marked “discovery.” It would be possible to argue with how sequential the process is in real life, and I’m not persuaded that some of the icons fit perfectly into their assigned slots. But another element of the chart’s design adds to its value by conveying the pace of change. The circle actually consists of concentric circles, marking off five-year intervals between 2000 and 2015. The location of an icon indicates when it came into use, with a very few, in the chart's core, having been around way back in the 1990s.
After contemplating the 101 Innovations mandala for a while, I contacted the site's creators in hopes of understanding its mysteries. At a poster session, there’s usually someone around to explain things only implicit in the poster itself, which can otherwise be puzzling.
That’s true especially of the diagrams making up the site’s “workflow pages.” Each resembles an extremely simple flowchart: a series of boxes, representing the six phases of research, with various logos plugged in. (Rather than endure a thousand words of description, just go have a look.) The charts also had labels such as “traditional” and “innovative.”
The parts each made sense, but the whole seemed opaque. Kramer elucidated things in an e-mail discussion, with some of her responses prepared jointly with her collaborator, Bosman. The site represents the tip of an iceberg: they have collected a database “contain[ing] details of some 15 attributes of over 400 innovative tools and sites,” most of which didn’t make it to the poster or Web site. “We are curious [about] the range of innovation,” they told me, “not the entire range of products per se.”
My best guess had been that the workflow charts might have been intended as recommendations of how researchers could combine the available digital tools. That, it turns out, was wide of the mark. The charts are heuristic rather than prescriptive.
“None of the workflow charts are meant as templates for researchers to adopt,” Kramer and Bosman explained, “more as primers for them to think about the tools they use and the type of workflow that best characterizes the way they work.” The charts provide “a starting point for discussions with researcher groups, such as graduate students, postdocs and faculty,” in order to determine existing practices and developing needs.
The goal is to elicit users’ “reasons for choosing specific tools -- what factors influence their decisions to switch to new tools and incorporate them” in their work. “[W]e plan to have a closer look at the coverage of digital humanities tools in our database, and look at disciplinary variations in our interpretations of most important developments, opportunities, etc.”
Bosman and Kramer also developed a typology of scholarly workflows, ranging from the neo-Luddite to the way-early adopter. “[W]e defined 'traditional' as a type of workflow that essentially would not have altered much from that of the print age, ‘modern’ as making use of online tools that enable researchers to consume information/functionality (roughly Web 1.0), ‘innovative’ as using more recent tools that enable online discussion, collaboration and active contribution (roughly Web 2.0) and ‘experimental’ as using tools that are currently being developed and have yet to establish themselves (‘bleeding edge’).”
The charts mention “virtual suites,” with no explanation. That turns out to be a reference to the shape of things to come: integrated packages of tools covering every stage of the research project, from brainstorm through the publication of scholarship and the archiving of data.
“My impression,” wrote Kramer, “is that publishers/organizations are working more and more towards providing tools for all stages of the workflow, and will probably start marketing them as such in the future. It is of course up to any individual researcher to decide whether she/he would want to use such a suite in its entirety, but it seems to be to the benefit of the publisher to offer the possibility, and convince institutions to buy into the whole package deal. Such developments would encourage siloing of workflows, with potential limited interoperability with other tools and thus lock in to a specific publisher/organization. This is not necessarily a good thing.”
Agreed. The next step would be for researchers to sign over their own brains to the company providing the suite, which seems like carrying the principle of intellectual property altogether too far.
“On the other hand,” Kramer pointed out, “we found that many new tools have been developed by researchers at the Ph.D./postdoc level (interestingly, many of them biomedical or bioinformatics scientists) who are frustrated with the current solutions available to them. Another trend we observe is that once these innovations prove useful and popular, they are often bought by large publishers.”
So in the foreseeable future “there will remain a mixture of new, independent innovations and consolidation of existing tools, often in a publisher's ‘suite.’”
The alternative would be a large-scale return to paper and ink. Some of us wouldn’t mind, but nobody should count on it.
For a rising generation of administrators in higher education, the heart of education is innovative technology -- and faculty get in the way.
In a recent speech, the new president of Carnegie Mellon University, Subra Suresh, intimated his administrative philosophy, remarking that, “the French politician Georges Clemenceau once said that, ‘War is too important to be left to the generals.’ Some would argue learning is too important to be left to professors and teachers.”
The speech opened the inaugural meeting of the Global Learning Council (GLC), held at Carnegie Mellon in September. The GLC brings together a group of high-level university administrators, government officials, and corporate executives who aspire to be an at-large advisory group, akin to the National Research Council, for higher education.
Suresh could have used the help of an English professor to unpack the analogy. In it, presidents and provosts are the generals, not faculty, who are the soldiers in the trenches; so the fitting parallel would actually be “education is too important to be left to administrators.”
On that count, I agree.
Suresh’s phrasing was not a slip but a frank statement — for him, faculty have little place in decision-making. And I think that it captures the leaning of many current initiatives touting innovation and technology.
The classic definition of the university is that it represents the corporate body of the faculty. Like the protagonist of Flannery O’Connor’s Wise Blood, who wants to establish the Church of Christ without Christ, the New Leaders of higher education want to establish education without educators. Or more precisely, they want to call the shots and have faculty do what they're told, like proper employees. To wit, few regular faculty members attended the conference (even if some of the administrators had started out as professors or occasionally did guest spots as such, it has probably been a while since they devoted much of their work time to that realm), and no social or cultural critic of higher education was scheduled to speak. Rather than engaging much criticism or debate — which, after all, is a mission of the university, testing ideas — the meeting had the character of an infomercial.
The focus of the conference was to install technology in higher education as fast as possible, and the speakers included high-level figures from Google, Kaplan, edX, and various other companies with a financial interest in the changeover.
The only speaker who raised doubts about technology was a military person, Frank C. DiGiovanni, director of force readiness and training in the U.S. Office of the Undersecretary of Defense. In his talk he said that he found that, to be effective, education needs to “stimulate the five senses,” which does not happen with devices. In fact, he noted that there was a “loss of humanity” with them. He added in subsequent discussion: “I worry about technology taking over. The center of gravity is the human mind.”
It seemed a little ironic to me that the only person reminding us of a humanistic perspective was the military man, though it was clear that DiGiovanni had a good deal of experience with how people actually learned and that he cared about it.
The innovation mantra has been most prominently expressed by the business guru Clayton Christensen, who coined the phrase “disruptive innovation.” It has become the credo especially of tech companies, which come out with ever-new products each year. The theory is that businesses like those in the American steel industry failed because they were set in their ways, continuing to do what had been successful before. Instead, even successful companies should disrupt what they’re doing. Hence, while Apple was making very good laptops, it went to the iPhone. Then to the iPad. Then to the Apple Watch.
Christensen has extended his theory to academe, in articles and his 2011 book, The Innovative University: Changing the DNA of Higher Education from the Inside Out (co-written with Henry Eyring). He basically sees higher education as set in its ways (hence the DNA metaphor) and ripe for a takeover by technology, and he holds up universities such as BYU-Idaho and the for-profit DeVry University as models for the future. He admits that Harvard University is still top of the line, but not everyone can go to Harvard, so, in cheery rhetoric (some of which is taken from the promotional literature of the colleges themselves), he sees these other schools doing what Walmart did to retail.
Christensen’s theory of innovation has been rebutted by Jill Lepore in a recent piece in The New Yorker, “The Disruption Machine.” She points out that most companies succeed because of sustaining innovation, not disruptive innovation. Apple, after all, still makes laptops, and US Steel is still the largest steel company in the US. In addition, she goes on to demonstrate that a good deal of Christensen’s evidence is thin, not to mention that many of his examples of success have since gone belly-up.
Besides holes in the general theory, it’s also questionable whether the kind of innovation that applies to technological or commodity products is readily translatable to education. Cognitivists have shown that education largely works affectively, through empathy, which requires live people in front of you. One learns by imaginatively inhabiting another’s point of view.
Moreover, most institutions of higher education have a different role than businesses — more like churches, which in fact is the analogy that helped establish their independent legal status in the 1819 Dartmouth decision of the U.S. Supreme Court. Something other than consuming goes on at universities, which gets lost in the commercial model of higher ed.
Think of it this way: while I like to shop at Macy’s and hope it stays in business, I would not donate any money to it, whereas I have to universities and churches. Of course universities should use best business practices, but if they act primarily as a business, with a saleable product and positioning students as customers, then they abnegate this other role. This is an inherent contradiction that vexes the push to commercialize higher education.
This is not to say that there is no use for technology. The Open Learning Initiative, a project studying statistics pedagogy at Carnegie Mellon, shows that some online segments work better than large lecture sessions. But if you read its reports, it’s clear that the experiment essentially offers a flipped classroom, and in fact students probably get more faculty contact than in the lecture model. It’s more like a return to a tutorial model. Who knew students do better with professors?
What the rush for innovation is really about, as Christopher Newfield, a leading critic of higher education, has pointed out, is not a better theory of change but a theory of governance. As Newfield puts it, “it isn’t about what people actually do to innovate better, faster, and cheaper, but about what executives must do to control innovative institutions.” It’s all about top-down plans of action, with the executive issuing a plan to disrupt what you’re doing, and subordinates to carry it out. Hence Suresh’s brushing aside those pesky faculty, who traditionally decide the way that education should be. That might be O.K. for a corporation, but it violates any standard idea of shared governance and academic freedom, which holds that faculty decide the direction of education.
It’s also about politics. The vision of higher education that the New Leaders of higher education would like to install is not a traditional horizontal institution, in which faculty are generally of equal power. (For instance, I’m a professor at Carnegie Mellon like Suresh, so technically I have the same faculty rights and determine the content of my courses and research, not him — and fortunately I have tenure, so he can’t fire me for writing this, which he could if it were a regular corporation.) Rather, it has become an oligarchical institution, reliant on business deals and donations. Business corporations, after all, are not democracies but oligarchies, with decisions running from the owners and executives downhill.
The oligarchical leaning of the New Leadership became clear to me in a talk by Luis von Ahn, a young computer scientist at Carnegie Mellon and MacArthur fellow. Von Ahn was animated and funny, bringing fresh energy to the proceedings. He evidently had made a killing in developing CAPTCHAs, those difficult-to-decipher wavy letters that verify you’re a human and not a bot online (in his PowerPoint he showed a picture of a man lying in a bed of money, which drew a lot of laughs).
Since then, he has developed and is CEO of Duolingo, a nonprofit designed to bring language training to people for free (or, more precisely, in exchange for their labor). It’s all online, and it’s self-funding: Duolingo sells crowdsourced translations from students to CNN and other businesses in need of them, and the money keeps the company going.
Von Ahn had several tenets of education, the first of which was that “the best education money can buy should be free.” I was with him on that, but I was not so sure about the rest.
One was that the best education should, “Be in your pocket, not in some building.” Again, if education relies on social contact and empathy, then we need a place for it other than the shallow contact of a screen. Think of it from the bottom up: children learn from the synesthesia of sociality, and those who are regularly read to by parents learn to read the soonest. What would a child be like if you locked him or her in a room with a device?
Moreover, while a program like Duolingo might be good for picking up a reading knowledge of a foreign language, I wonder about its transposition to speaking. While von Ahn attests to good testing results online, languages, after all, are not formulae but social. Anyone who has learned a foreign language knows that it’s a much different experience when you’re there, in front of live people.
Still, Duolingo seems like a good thing and an exemplary use of online education. However, von Ahn had another tenet: that learning should come through a corporation, not through a government. He said that you cannot trust governments (most “suck,” and “other people’s funding usually comes with other people’s ideas and influences”), a view he drew from personal experience as an immigrant from Guatemala. That might be understandable in his individual case, but it is deeply troubling to anyone who has a Jeffersonian sense of higher education and believes that it should be a public right and should cultivate citizens.
It boggles the mind to think that corporations would be better. What are the guarantees that they would be more free from “other people’s ideas and influences,” particularly of just a few people?
Perhaps if von Ahn is running them. (And still, he sold his previous project to Google, and one might question Google’s proprietary policies, which we have little recourse to alter.) Governments presumably are based on the will of the people, whereas corporations are based on the will of their owners, boards, and executives, oriented toward gaining the most advantage for themselves. A poor government might fail to represent the will of its people, but the problem then is the lack of democracy. By definition, corporations represent a small, self-interested group.
While von Ahn seems like an admirable person and has put some of his money into good causes, his statement was the credo of plutocracy: the rich and powerful should rule, and their good effects might trickle down. But I don’t trust corporations as much as he does, particularly since they have brought us our current world of severe inequality.
American higher education was conceived as a remedy to inequality in the period after World War II, with policy documents like the 1947 Truman Commission Report setting out a plan to fight inequality “in so fundamental a right as education,” spurring state and federal funding to expand high-quality public colleges and universities and allow a greater number of citizens to attend them for minimal tuition.
The new technology reinstalls inequality, with the wealthy (and a few high-scoring poor) receiving bespoke higher education at elite schools, but most of the rest getting theirs on a screen — with great graphics! like a game!