It's taken a while, but we’ve made a little progress on the mathesis universalis that Leibniz envisioned 300 or so years ago – a mathematical language describing the world so perfectly that any question could be answered by performing the appropriate calculations.
Aware that the computations would be demanding, Leibniz also had in mind a machine to do them rapidly. On that score things are very much farther along than he could ever have imagined. And while the mathesis universalis itself seems destined to remain only the most beautiful dream of rationalist philosophy, there’s no question that Leibniz would appreciate the incredible power to store and retrieve information that we’ve come to take for granted. (Besides being a polymathic genius, he was a librarian.)
Johanna Drucker’s Graphesis: Visual Forms of Knowledge Production, published by Harvard University Press, focuses in part on the capacity of maps, charts, diagrams, and other modes of display to encode and organize information. But only in part: while Drucker’s claims for the power of visual language are less extravagantly ambitious than Leibniz’s for mathematical symbols, it is a matter of degree and not of kind. (The author is professor of bibliographical studies at the Graduate School of Education and Information Studies of the University of California at Los Angeles.)
“The complexity of visual means of knowledge production,” she writes, “is matched by the sophistication of our cognitive processing. Visual knowledge is as dependent on lived, embodied, specific knowledge as any other field of human endeavor, and integrates other sense data as part of cognition. Not only do we process complex representations, but we are imbued with cultural training that allows us to understand them as knowledge, communicated and consensual, in spite of the fact that we have no ‘language’ of graphics or rules governing their use.”
Forget the old saw about a picture being worth a thousand words. Drucker’s claim is not about pictorial imagery, as such. A drawing or painting may communicate information about how a person or place looks, but the forms she has in mind (bar graphs, for example, or Venn diagrams) perform a more complex operation. They convert information into something visually apprehended.
We learn to understand and use these visual forms so readily that they seem almost self-evident. Some people know how to read a map better than others -- but all of us can at least recognize one when we see it. Likewise with tables, graphs, calendars, and family trees. In each case we intuitively understand how the data are organized, if not what they mean.
But the pages of Graphesis teem with color reproductions of 5,000 years’ worth of various modes of visually rendered knowledge – showing how they have emerged and developed over time, growing familiar but also defining or reinforcing ways to apprehend information.
A good example is the mode of plotting information on a grid. Drucker reproduces a chart of planetary movements in that form from a 10th-century edition of Macrobius. But the idea didn’t catch on: “The idea of graphical plotting either did not occur, or required too much of an abstraction to conceptualize.” The necessary leap came only in the early 17th century, when Descartes reinvented the grid in developing analytical geometry. His mathematical tool “combined with intensifying interest in empirical measurements,” writes Drucker, “but they were only slowly brought together into graphic form. Instruments adequate for gathering ‘data’ in repeatable metrics came into play … but the intellectual means for putting such information into statistical graphs only appeared in fits and starts.”
And in the 1780s, a political economist invented a variation on the form by depicting the quantity of various exports and imports of Scotland as bars on a graph – an arresting presentation, in that it shows one product being almost twice as heavily traded as any other. (The print is too small for me to determine what it was.) The advantages of the bar graph in rendering information to striking effect seem obvious, but it, too, was slow to enter common use.
“We can easily overlook the leap necessary to abstract data and then give form to its complexities,” writes Drucker. And once the leap is made, it becomes almost impossible to conceive such data without the familiar visual tools.
If the author ever defines her title term, I failed to mark the passage, but graphesis would presumably entail a comprehensive understanding of the available and potential means to record and synthesize knowledge, of whatever kind, in visual form. Drucker's method is in large measure inductive: She examines a range of methods of presenting information to the eye and determines how the elements embed logical concepts into images.
While art history and film studies (especially work on editing and montage) are relevant to some degree, Drucker’s project is very much one of exploration and invention. Leibniz’s mathesis was totalizing and deductive; once established, his mathematical language would give final and definitive answers. By contrast, graphesis would entail the regular creation of new visual tools in keeping with the appearance of new kinds of knowledge, and new media for transmitting it.
“The ability to think in and with the tools of computational and digital environments,” the author warns, “will only evolve as quickly as our ability to articulate the metalanguages of our engagement.”
That passage, which is typical, gives some indication of why Graphesis will cull its audience pretty quickly. Some readers will want to join her effort; many more will have difficulty imagining quite what it is. Deepening the project's fascination, for those drawn to it, is Drucker's recognition of an issue so new that it still lacks a name: What happens to the structuring of knowledge when maps, charts, etc. appear not just on a screen, but on one responsive to touch? The difficulties that Graphesis presents are only incidentally matters of diction; the issues themselves are difficult. I suspect Graphesis may prove to be an important book, for reasons we'll fully understand only somewhere down the line.
The dominion of open educational resources is apparently looming large, if one were to judge by a blog thread touched off by a panel discussion at a recent Knewton event. David Wiley, participating in the panel, made the bold claim that “in the near future, 80 percent of textbooks would be replaced by OER content.” Jose Ferreira responded critically to that view a few days later with a blog post, to which Wiley offered a rebuttal. Michael Feldstein then weighed in with a dissenting perspective of his own.
It’s a spirited and fruitful discussion; well worth a read. Their comments, though, didn’t tackle what I’ve come to see as the core issue for the OER movement, a foundational assumption that has crimped its progress. The assumption holds that because open-source educational content is like open-source software -- in that it’s free content that you can chop up, remix, and share with anyone -- its application and uses should follow in a similar way.
The short history of the two movements makes clear that this is not the case. As David Wiley points out, the first openly licensed educational materials were published more than 15 years ago, around the time that Linux led the movement of open-source software (OSS) into the mainstream. So why did one open-source movement take off as the other tarried on the margins, championed only by the most stalwart advocates?
While Linux has long been part of standard practice, and our daily computing lives would be unthinkable without open-source software, more than 90 percent of faculty textbook adoptions in the U.S. are still locked-down, expensive commercial materials. Most don’t doubt the unsustainability of the present course (including most publishers), but it’s also plain to see that the OER movement has not yet offered a truly satisfying alternative. The failure of OER to become mainstream at this point is only underscored by the myriad forces working in its favor: economic pressures, greater administrative accountability, government oversight and budget cuts, and a truly broken publisher model.
A clear reason for the different trajectories is the commercial support that OSS has enjoyed, and that OER has not. Contrary to the common view that OSS has advanced largely through loosely organized communities of volunteers, it’s actually often strongly supported through private enterprise. More than 80 percent of the contributions to Linux, for example, come today from companies like Google and Samsung. But the success of OSS isn’t simply through commercial appropriation. Instead, companies were able to support OSS because they were building on an already-present foundation of voluntarism in the hacker community. While a volunteer community of course exists in OER, it does not have the depth and breadth of its OSS counterpart. The voluntarism of the hacker community does not, in other words, map well onto the community of academic instructors. This situation isn’t an accident of history but reflects a fundamental difference in the roles and self-understanding of each group.
With OSS, the hacker is often an end user but more centrally the creator and modifier of code. And to the extent that hackers form a community, it is a community of problem-solvers addressing issues that concern their work directly. In his seminal book on hacker open-source culture, The Cathedral and the Bazaar, Eric Raymond suggests that “Every good work of software starts by scratching a developer’s personal itch.” Contrast this with the relationship faculty have to the educational content they use: for most, it’s a tool for teaching a class, a means of supporting an activity that is largely extrinsic to the tasks of creating and modifying pedagogical content. Most instructors are not editors, let alone creators of their classroom content; they are simply end users.
If there’s a personal itch to scratch at all, it’s usually in the area of original scholarship and research, not teaching materials (let’s recall that the Internet was born to share research, not lesson plans). For most instructors, the textbook is a convenient package, without which the task of managing a class would be that much more laborious. Commercial publishers have long recognized what the OER movement has not: that often-overworked and underpaid instructors are looking to content and course technology to make their lives easier, not to take on the additional responsibility of managing their own content without financial recognition for that labor. Unlike the open-source hacker, most instructors get no thrill from belonging to a community of content problem-solvers. To truncate an otherwise large topic, instructors are not hackers, and that changes everything. Or it should have for the OER movement.
The recent gains of, and the growing prospects for, OER are, in fact, a tacit acknowledgement of this difference. No doubt the single biggest success to date for the movement is the OpenStax project, but this success breaks any illusion that the practice of OER is analogous to that of open software. Connexions, the OpenStax predecessor project at Rice, languished for years as an open-source content platform until Rice hired Joel Thierstein as associate provost to turn the project around. What did he do? Thierstein, who previously worked in the private sector developing content for the telecommunications industry, had a simple and very powerful idea: raise grant money to hire the same companies that ghostwrite textbooks for the traditional publishers, and then release the texts into the public domain under the most open license available.
As commercial textbook equivalents, their use required no behavioral changes for faculty. They would not be “learning objects” or fragments that required additional faculty work. Faculty could use them as teaching tools, just as they would conventional content, except, in this case, they’re free. Like the commercial publishers, Thierstein rightly understood that faculty want an easy and straightforward way to adopt high-quality and appropriate content. Thierstein’s success enabled Rice to go forward with additional fund-raising and the rebranding of Connexions as OpenStax. A simple idea has had a significant impact.
And yet for all the success of OpenStax, it’s also clear that a free version of a commercial text will never alone be sufficient for OER to reach the mainstream, nor should it be. Some learning technologies, either already in use or emerging, have the capacity to improve student success significantly. The OER movement’s almost singular focus on cost can obscure the larger objective -- actually getting more students through to graduation while ensuring that they’ve learned (and enjoyed learning) something along the way.
The risk for the OER movement is that it unwittingly reinforces the kind of resource disparities we see everywhere else in our society: a situation in which the well-off enjoy content with the latest technologies and practices, and the not-so-well-off manage without them. To be sure, OpenStax partnerships with third-party technology partners are a recognition of this need, but these relations are still established within the traditional publisher/tech partner binary model, with the difference that the core content is low-cost or free. As important as that project is, it doesn’t yet realize the promise of OER as disaggregated high-quality content created and modified from anywhere.
A better way forward is to compensate the stakeholders -- faculty, copyright holders, and technologists, principally -- for their contributions to the OER ecosystem. This can be done by charging students a nominal fee for the OER courses they take, or through a modest institutional materials fee. When there are no longer meaningful costs associated with the underlying content, it becomes possible to compensate faculty for the extra work while radically reducing costs to students. While I launched a new venture to do this, what’s needed are lots of entities -- for-profit and nonprofit -- to experiment with funding models. It’s all achievable, and there will likely be no single way to accomplish it.
From this will emerge a new breed of courseware, one that preserves the low cost and flexibility of open content while embracing learning technologies that support faculty and student success. Certainly such a model involves costs, though not so much for the content as for the tools that improve its use and for the people on the ground who are actually doing the work of curating and adapting materials. Align the incentives in the right way, and this model of openness can empower faculty members and institutions in unprecedented ways. It will encourage local innovation so that, over time, the courseware, now unlocked and financially supported, becomes an expression of the teaching itself.
Openness, then, lends itself to a new order of distributed content development that includes outstanding learning technologies; I think all the bloggers mentioned above recognize this. But precisely because instructors are not hackers and belong to an entirely different community of practice, a system for distributed content development also needs to be accompanied by a system of distributed financial incentives. When this all comes together -- and it will -- then courseware will escape commodification and become a creative and low-cost force in education. Only then should we begin to count the percentages.
Like a t-shirt that used to say something you can’t quite read anymore, a piece of terminology will sometimes grow so faded, or be worn so thin, that retiring it seems long overdue. The threadbare expression “socially constructed” is one of them. It’s amazing the thing hasn’t disintegrated already.
In its prototypical form -- as formulated in the late 1920s, in the aphorism known as the Thomas theorem -- the idea was bright and shapely enough: “If men define situations as real, they are real in their consequences.” In a culture that regards the ghosts of dead ancestors as full members of the family, it’s necessary to take appropriate actions not to offend them; they will have a place at the table. Arguments about the socially constructed nature of reality generalize the Thomas theorem more broadly: we have access to the world only through the beliefs, concepts, categories, and patterns of behavior established by the society in which we live.
The idea lends itself to caricature, of course, particularly when it comes to discussion of the socially constructed nature of something brute and immune to argumentation like, say, the force of gravity. “Social constructivists think it’s just an idea in your head,” say the wits. “Maybe they should prove it by stepping off a tall building!”
Fortunately the experiment is not often performed. The counterargument from gravity is hardly so airtight as its makers like to think, however. The Thomas theorem holds that imaginary causes can have real effects. But that hardly implies that reality is just a product of the imagination.
And as for gravity -- yes, of course it is “constructed.” The observation that things fall to the ground is several orders of abstraction less than a scientific concept. Newton’s development of the inverse square law of attraction, its confirmation by experiment, and the idea’s diffusion among the non-scientific public – these all involved institutions and processes that are ultimately social in nature.
Isn’t that obvious? So it seems to me. But it also means that everything counts as socially constructed, if seen from a certain angle, which may not count as a contribution to knowledge.
A new book from Temple University Press, Darin Weinberg’s Contemporary Social Constructionism: Key Themes, struggles valiantly to defend the idea from its sillier manifestations and its more inane caricatures. The author is a reader in sociology and fellow at King’s College, University of Cambridge. “While it is certainly true that a handful of the more extravagant and intellectually careless writers associated with constructionism have abandoned the idea of using empirical evidence to resolve debates,” he writes, not naming any names but manifestly glaring at people over in the humanities, “they are a small and shrinking minority.”
Good social constructionist work, he insists, “is best understood as a variety of empirically grounded social scientific research,” which by “turn[ing] from putatively universal standards to the systematic scrutiny of the local standards undergirding specific research agendas” enables the forging of “the tools necessary for discerning and fostering epistemic progress.”
The due epistemic diligence of the social scientists renders them utterly distinct from the postmodernists and deconstructionists, who, by Weinberg's reckoning, have done great damage to social constructionism’s credit rating. “While they may encourage more historically and politically sensitive intuitions regarding the production of literature,” he allows, “they are considerably less helpful when it comes to designing, implementing, and debating the merits of empirically grounded social scientific research projects.”
And that is being nice about it. A few pages later, Weinberg pronounces anathema upon the non-social scientific social-constructionists. They are “at best pseudo-empirical and, at worst, overtly opposed to the notion that empirical evidence might be used to improve our understanding of the world or resolve disputes about worldly events.”
Such hearty enthusiasm for throwing his humanistic colleagues under the bus is difficult to gainsay, even when one doubts that a theoretical approach to art or literature also needs to be “helpful when it comes to designing, implementing, and debating the merits of empirically grounded social scientific research projects.” Such criticisms are not meant to be definitive of Weinberg’s project. A sentence like “Derrida sought to use ‘deconstruction’ to demonstrate how specific readings of texts require specific contextualizations of them” is evidence chiefly of the author’s willingness to hazard a guess.
The book’s central concern, rather, is to defend what Weinberg calls “the social constructionist ethos” as the truest and most forthright contemporary manifestation of sociology’s confidence in its own disciplinary status. As such, it stresses “the crucially important emphases” that Weinberg sees as implicit in the concept of the social – emphases “on shared human endeavor, on relation over isolation, on process over stasis, and on collective over individual, as well as the monumental epistemic value of showing just how deeply influenced we are by the various sociohistorical contexts in which we live and are sustained.”
But this positive program is rarely in evidence so much as Weinberg’s effort to close off “the social” as something that must not and cannot be determined by anything outside itself – the biological, psychological, economic, or ecological domains, for example. “The social” becomes a kind of demiurge: constituting the world, then somehow transcending its manifestations.
It left this reader with the sense of witnessing a disciplinary turf war, extended to almost cosmological dimensions. The idea of social construction is a big one, for sure. But even an XXL can only be stretched just so far before it turns baggy and formless -- and stays that way for good.
While looking around for scholarship on witchcraft trials just the other day (not in connection with current events, though with American politics you never know) I stumbled across Crime: A Batch From The Journal of Interdisciplinary History, an ebook in a new series from MIT Press that launched in May.
It reprints Edward Bever’s "Witchcraft Prosecutions and the Decline of Magic” from 2009, plus nine other articles on crime across the past few centuries. As far as I can tell, Batches is the first of its kind, at least in ebook format: a series of thematic anthologies drawn from the back files of scholarly journals that MIT publishes. Two other titles have appeared so far, Spies: A Batch from the Journal of Cold War Studies and The United States and China: A Batch from International Security. They’re all in an attractive and sensibly designed format, at the modest price of $6.99. (If an ebook series along the same lines as Batches does exist, I'll undoubtedly hear about it, and in that case will update this column with the pertinent information.)
Describing so ethereal an artifact as “attractive” may sound strange, but plenty of titles coming across my ereader have been real eyesores. (Both unnavigable and un-proofread, they've seemed overpriced even when free.) The MIT volumes have functional tables of contents, available both at the start of the file and via drop-down menu. Not only are there links between the text and endnotes but they work in both directions.
That is something you ought to be able to take for granted, but can't. No major investment of resources is required — just a little attention to detail when preparing the text. It’s time for readers to become a lot more aggressive about demanding adequate production values from the ebooks they buy.
That's not to say that publishing volumes of papers selected from a specific journal is a new idea, even in digital format. For example, there is Classics from IJGIS: Twenty years of the International Journal of Geographical Information Science and Systems and ISO Science Legacy: Reprinted from "Space Science Reviews" Journal, V.
Apart from being specialized and quite expensive -- even by hardback standards, let alone for an ebook -- they differ from the Batches collections by being stand-alone works, rather than part of a series. (The title of ISO Science Legacy: Reprinted from "Space Science Reviews" Journal, V would seem to imply that volumes I-IV are available to download by anyone with lots and lots of money, though in fact there’s just the one.)
And of course material from JSTOR and other repositories can be stored and read on a handheld device. Such material is almost always in PDF, however, which has limited flexibility compared to text in an ereader-specific format (e.g., mobi or epub). The latter allow the user to adjust the size of the type, and in my experience the ability to highlight passages in a PDF is luck of the draw, while that is less of a problem in the other formats.
A small but expanding array of scholarly periodicals now appear in ebook editions, including the American Academy of Arts and Sciences' flagship Daedalus and the American Economic Association’s quarterly Journal of Economic Perspectives. Likewise with a number of law reviews, including many of the most prominent ones. Diverse as these journals are, they all routinely publish material of potential interest to non-specialist readers. Selling individual issues online gets the journal in front of a wide public without the hazards of newsstand distribution.
The new series from MIT is a synthesis of all the developments just listed — and, in some regards, an improvement on them. While reading around in the debut volumes, I was impressed both by the range of issues covered in each volume and by how well the selections complemented one another. For that, too, cannot be taken for granted. Collections of scholarly papers are often forced together rather than edited, much less integrated into a cohesive volume. Reading one is like attending a shotgun polygamous marriage among strangers, albeit not so memorable.
Jill Rodgers, marketing manager for MIT Press, made time to respond to my questions about the series by email. Her answers went some way toward explaining why the collections hang together better than compilations often do.
For one thing, the press monitors how its journals are being used. "We have access to traditional reports like article downloads and citations,” Rodgers told me. "Using Google Analytics and Altmetric.com, we also get a lot of information about what sites are bringing traffic to our website, who’s talking about our articles on blogs and in media outlets, how many people are bookmarking articles in Mendeley, who’s sharing abstracts via Twitter, etc.”
The possibility of using all that data to brainstorm ideas for ereader collections came up during a retreat late last year. The gestation time for the series was just six months.
"To create a Batch,” she said, "we first identify an article or topic that is getting a lot of play. We move to our archives and do some searching to see if we have enough content ... then reach out to the journal editor to see if he/she agrees the topic is skillfully covered by the journal and is willing to curate a final [table of contents].” Ideally the collection will include 6 to 10 papers; the titles now available contain 10 each.
While the marketing department’s data generated the topics, the collections' salience comes from the work of the journal editors who, "besides weighing the hundreds or thousands of articles available, will also compose an introduction” that explains "the impact of the articles within the field and their importance to the journal.”
The collection then goes into the digital production pipeline. “The first round of three Batches took 3-4 months from proposal to loading on Amazon,” Rodgers noted, "but I think that time period will shorten now that we’ve got the hang of it.”
Three more collections are nearly ready to go -- although they aren’t yet listed by online vendors, nor has any other information about them appeared. In other words, you read it here first. They are Gender and Sexuality: A Batch from TDR (i.e., The Drama Review), Broadening the Domain of Grammar: A Batch from Linguistic Inquiry, and Responding to Terrorism: A Batch from International Security.
Rodgers indicated that the press has "another half dozen or so 'half-baked batches' that are in various stages.” She and her colleagues are now "also talking about taking requests for new Batches from readers.”
Other university presses are bound to follow MIT’s lead. For one thing, there is the appeal of being able to make use of material already accumulated by the publisher in its stable of journals. A proposal that involves getting content out of the digital warehouse and into revenue-generating circulation seems likely to enjoy the benefit of the doubt. But presses following the model of the new series really should mimic its standards as well.
And if they don’t… well, let’s take up that topic later, in another column.