Several major publishers will experiment with offering free course materials to Coursera users enrolled in the Silicon Valley-based company's massive open online courses. The partnership, which involves Cengage Learning, Macmillan Higher Education, Oxford University Press, SAGE, and Wiley, will deliver material using Chegg, a company that offers an e-book platform. According to Coursera, while professors teaching MOOCs on its platform have been able to assign free high-quality content, they will now be able to work with publishers to "provide an even wider variety of carefully curated teaching and learning materials at no cost to the student." Coursera has, however, generated some revenue from the Amazon.com affiliates program wherein users buy books suggested by professors.
OpenStax College, the year-old Rice University startup that produces free online textbooks, will more than double the number of fields in which it has titles by 2015, the university announced today. A grant from the Laura and John Arnold Foundation will allow OpenStax College to add to its current offerings in physics and sociology, as well as to the two new biology books and the introductory anatomy text coming out this fall. The new titles will be in precalculus, chemistry, economics, U.S. history, psychology, and statistics, Rice said, toward its goal of producing high-quality open-source books for the 25 most-enrolled college courses. OpenStax says its two existing texts have been downloaded more than 70,000 times so far.
Students at the state of Washington's 34 community and technical colleges will save hundreds of thousands of dollars a year because of low-cost textbooks produced by the state's Open Course Library, the college system said this week. The library, which received funding from the state legislature and the Bill & Melinda Gates Foundation, spent $1.8 million to develop low-cost course material, including textbooks of no more than $30, for 81 common courses. The effort has already saved students $5.5 million since fall 2011, according to an analysis by The Student Public Interest Research Groups, an advocacy organization.
“Students are clearly the winners in the open courseware library model,” said Marty Brown, the executive director of the State Board for Community and Technical Colleges, in a conference call with reporters.
Nicole Allen, a textbook advocate for the student group, said Washington's materials are used outside the state, including by a math department in Arizona. Policymakers in California and British Columbia have created similar projects, she said.
“A poem,” wrote William Carlos Williams toward the end of World War II, “is a small (or large) machine of words.” I’ve long wondered if the good doctor -- Williams was a general practitioner in New Jersey who did much of his writing between appointments -- might have come up with this definition out of weariness with the flesh and all its frailties. Traditional metaphors about “organic” literary form usually imply a healthy and developing organism, not one infirm and prone to messes. The poetic mechanism is, in Williams’s vision, “pruned to a perfect economy,” and there is “nothing sentimental about a machine.”
Built for efficiency, built to last. The image this evoked 70 years ago was probably that of an engine, clock, or typewriter. Today it’s more likely to be something with printed circuits. And a lot of poems in literary magazines now seem true to form in that respect: The reader has little idea how they work or what they do, but the circuitry looks intricate, and one assumes it is to some purpose.
I had much the same response to the literary scholarship Matthew L. Jockers describes and practices in Macroanalysis: Digital Methods & Literary History (University of Illinois Press). Jockers is an assistant professor of English at the University of Nebraska at Lincoln. The literary material he handles is prose fiction -- mostly British, Irish, and American novels of the 18th and 19th centuries -- rather than poetry, although some critics apply the word “poem” to any literary artifact. In the approach Jockers calls “macroanalysis,” the anti-sentimental and technophile attitude toward literature defines how scholars understand the literary field, rather than how authors imagine it. The effect, in either case, is both tough-minded and enigmatic.
Following Franco Moretti’s program for extending literary history beyond the terrain defined by the relatively small number of works that remain in print over the decades and centuries, Macroanalysis describes “how a new method of studying large collections of digital material can help us to understand and contextualize the individual works within those collections.”
Instead of using computer-based tools to annotate or otherwise explore a single work or author, Jockers looks for verbal patterns across very large reservoirs of text, including novels that have long since been forgotten. The author notes that only “2.3 percent of the books published in the U.S. between 1927 and 1946 are still in print” (even that figure sounds high, and may be inflated by the recent efforts of shady print-on-demand “publishers” playing fast and loose with copyright) while the most expansive list of canonical 19th-century British novels would represent well under 1 percent of those published.
Collections such as the Internet Archive and the HathiTrust Digital Library make enormous bodies of text available for analysis. Add to this the capacity to analyze the metadata about when and where the books were published, as well as available information on the authors, and you have a new, turbocharged sort of philology -- one covering wider swaths of literature than even the most diligent and asocial researcher could ever read.
Or would ever want to, for that matter. Whole careers have been built on rescuing “unjustly neglected” authors, of course, but oblivion is sometimes the rightful outcome of history and a mercy for everyone involved. At the same time, the accumulation of long-unread books is something like a literary equivalent of the kitchen middens that archeologists occasionally dig up – the communal dumps, full of leftovers and garbage and broken or outdated household items. The composition of what’s been discarded and the various strata of it reveal aspects of everyday life of long ago.
Jockers uses his digital tools to analyze novels by, essentially, crunching them -- determining what words appear in each book, tabulating the frequency with which they are used, likewise quantifying the punctuation marks, and working out patterns among the results according to the novel’s subgenre or publication date, or biographical data about the author such as gender, nationality, and regional origin.
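As a rough illustration of the kind of crunching described above -- a toy sketch, not Jockers's actual pipeline, which works over far larger corpora with far richer features -- a few lines of Python can tabulate word and punctuation frequencies for a text:

```python
import re
from collections import Counter

def crunch(text):
    """Tabulate relative word frequencies and raw punctuation counts.

    A toy stand-in for the feature extraction described above; real
    macroanalysis tools add metadata (genre, date, author) to find
    patterns across thousands of such frequency profiles.
    """
    words = re.findall(r"[a-z']+", text.lower())
    punct = re.findall(r"[.,;:!?\"-]", text)
    total = len(words) or 1
    word_freq = {w: n / total for w, n in Counter(words).items()}
    return word_freq, Counter(punct)

freqs, punct = crunch("Over the hills, under the moon; over the river.")
# 'over' appears twice among nine words, so its relative frequency is 2/9.
```

Comparing such profiles across books grouped by subgenre or author biography is, in miniature, the correlation-hunting the book describes.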
The findings that the author reports tend to be of a very precise and delimited sort. The words like, young, and little “are overrepresented in Bildungsroman novels compared to the other genres in the test data.” There is a “high incidence of locative prepositions” (over, under, within, etc.) in Gothic fiction, which may be “a direct result of the genre’s being ‘place oriented.’” That sounds credible, since Gothic characters tend to find themselves moving around in dark rooms within ruined castles with secret passageways and whatnot.
After about 1900, Irish-American authors west of the Mississippi began writing more fiction than their relations on the other side of the river, despite being fewer in number and thinner on the ground. Irish-American literature is Jockers’s specialty, and so this statistically demonstrable trend proves of interest given that “the history of Irish-American literature has had a decidedly eastern bias…. Such neglect is surprising given the critical attention that the Irish in the West have received from American and Irish historians.”
As the familiar refrain goes: More research is needed.
Macroanalysis is really a showcase for the range and the potential of what the author calls “big data” literary study, more than it is a report on its discoveries. And his larger claim for this broad-sweep combination of lexometric and demographic correlation-hunting – what Moretti calls “distant reading” -- is that it can help frame new questions about style, thematics, and influence that can be pursued through more traditional varieties of close reading.
And he’s probably right about that, particularly if the toolkit includes methods for identifying and comparing semantic and narrative elements across huge quantities of text. (Or rather, when it includes them, since that’s undoubtedly a matter of time.)
Text-crunching methodologies offer the possibility of establishing verifiable, quantifiable, exact results in a field where, otherwise, everything is interpretive, hence interminably disputable. This sounds either promising or menacing. What will be more interesting, if we ever get it, is technology that can recognize and understand a metaphor and follow its implications beyond the simplest level of analogy. A device capable of, say, reading Williams’s line about the poem as machine and then suggesting something interesting about it – or formulating a question about what it means.
The carnage and manhunt in Boston last week obliged the Digital Public Library of America to postpone its grand opening festivities at the Boston Public Library until sometime this fall. So sudden a change of plans could only create a logistical nightmare. The roster of museums, archives, and libraries participating in DPLA runs into the hundreds, and the two-day event (Thursday and Friday) was booked to capacity, with scores of people on the standby list. But the finish line for the marathon was just outside the library, and rescheduling was unavoidable.
The delay applied only to the gala, not to DPLA itself: the site launched on Thursday at noon, Eastern time, right on schedule. The response online has been, for the most part, enthusiasm just short of euphoria. The collection contains not quite 2.4 million digital “objects,” including books, manuscripts, photographs, recorded sound, and film/video. More impressive than the quantity of material, though, is how much thought has gone into how it’s made available.
That’s true even of the site’s address: DP.LA. I’ve seen at least one grumble about how anomalous this looks. Which it does, but in a good way. Even if you forget the address, it takes no effort to reconstruct. The brevity of the URL makes it convenient to type on a cellphone; when you do, the site’s homepage is readily navigable on the small screen. That demonstrates an awareness of how a good many visitors will actually use the site – more so than is often the case with library catalogs online.
DPLA is the work of people who understand that design is not just icing on the digital cake, but a significant (even decisive) factor in how we engage with content in the first place. They have made available an application program interface (API) for the site, which is a very useful thing indeed, according to my source in the geek community. With the API, users can create new tools for sorting and presenting the library’s materials. Combine it with a geolocation API, for example, and you could put together an application displaying the available photographs of the street you are on, organized decade by decade.
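By way of illustration only -- the endpoint and parameter names below follow DPLA's published API documentation at the time, but anyone building on them should check the current docs -- a search query against the library's items endpoint can be assembled in a few lines:

```python
from urllib.parse import urlencode

# Sketch of a DPLA items query. The base URL and the api_key requirement
# reflect DPLA's documented v2 API; treat specifics as assumptions to verify.
BASE = "https://api.dp.la/v2/items"

def build_query(keyword, api_key, page_size=10):
    """Return a URL that searches DPLA's metadata for a keyword."""
    params = {"q": keyword, "page_size": page_size, "api_key": api_key}
    return BASE + "?" + urlencode(params)

url = build_query("Walt Whitman", "YOUR_KEY")
# Fetching this URL (e.g. with urllib.request) returns JSON whose "docs"
# array holds the matching metadata records, each linking back to the
# contributing institution's own digital repository.
```

A geolocation-aware application of the sort described above would simply feed place names or coordinates from a mapping API into queries like this one.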
The library’s potential for assembling and integrating an incredible range of documents and knowledge is almost unimaginable. Excitement seems appropriate. But in describing my own impressions of DPLA, I want to be a little more qualified about the enthusiasm it inspires. Things are not nearly as far along as some comments have implied. This isn’t just naysaying. The site is currently in its beta version, and many of my points will probably be nullified in due course. But it’s better to be aware of some of the limitations beforehand than to visit the site expecting a digital Library of Alexandria.
One thing to keep in mind is that DPLA is not so much a library as an enormous card catalog, with the “shelves” of books, photographs, and so forth being the digital collections of libraries and historical societies, large and small, all over the country. The range of material offered through the Digital Public Library of America reflects what people running the local collections have decided to digitize and make available. What DPLA gathers and makes searchable is the metadata: descriptions of what a document contains (its subject, origins, copyright status, and so on) and of its characteristics as a digital object (size and file type).
The DPLA “card” gives the available information about an item, often accompanied by a thumbnail image of the book cover, manuscript, etc. – along with a link taking you to the digital repository in which it appears. DPLA puts the metadata into a standard format. But much of the content-description will inevitably be done by local librarians and archivists, making for a considerable range in detail. Often the DPLA entry will provide a bare minimum of description, though some entries run to a paragraph or two.
But the entry is only as strong as its link. It seemed appropriate to make one of my earliest searches at the Digital Public Library for the quintessential American poet Walt Whitman. There were 52 hits, with 9 of the top 10 being manuscripts of his letters in the Department of Justice collection at the National Archives. Not one of the links for the letters worked. By contrast, I had no trouble getting access to photographs of the poet held by the Smithsonian Institution.
This proved par for the course. Most links worked -- but out of two dozen entries for items in the National Archives, only one did. It’s hardly surprising (gremlins have a strong work ethic), but it shows the need for troubleshooting. Users of the library can be expected to point out such glitches, if encouraged to do so. It might be worth adding a widget to each record that would let users flag an inoperative link, a typographical error, or a problem with the content description. The site does have a contact page, but a prompt built into each record would make reporting errors far more likely.
Continued thumbing through the catalog demonstrated how early a stage DPLA has reached in accumulating its collection -- and how much fine-tuning its search engine may need.
Entering “Benjamin Franklin,” you get more than 1,400 results. Out of the first 30, all but 3 are documents (usually death certificates) for people named after the inventor and statesman. A toolbar on the left allows the user to refine the search in various ways – but the most useful filter, by subject, is at the very bottom and easy to overlook.
It was encouraging to get 17 results when searching for Phillis Wheatley, the first published African-American poet, but 15 of them led to records from the 1940 census, by which point she had been dead the better part of 150 years. Only one of the other two was at all germane to her as a historical figure; the other concerned an Atlanta branch of the Young Women’s Christian Association named in her honor.
I expected to locate just a few things about the Southern Tenant Farmers Union of the 1930s, but in fact got no hits at all. At the other extreme, DPLA has records for more than 90 items pertaining to the Ku Klux Klan – photographs, handbills, and cartoons, both pro- and anti-. Quite likely these were among the most striking and attention-grabbing items in various collections, and were digitized for use in print publications and online. It's concrete evidence that the Digital Public Library of America's offerings will be only as representative as the decisions made by the contributing institutions.
A number of foundations and government agencies have lent their support to DPLA, and its progress toward incorporation as a 501(c)(3) organization should make it an even more appealing destination for the big philanthropic bucks. But important as funding certainly is for the library’s future, what will ultimately be decisive for its success is a massive infusion of intellectual capital. Some of it will come from code writers hacking out new applications using the library's metadata and API. More than that, though, DPLA will need to encourage the participation and the expertise of people using the site. It's an impressive foundation and scaffold, but it's up to scholars, librarians, and other knowledgeable citizens to build the library, from the ground up.
While in Iceland a few weeks ago, I tried to work up the nerve to try hákarl, one of the local delicacies. It is prepared by burying chunks of Greenland shark meat in the ground for a few weeks so that it can “ferment” (the nicest word possible here), after which it is unearthed and kept in a smokehouse until extra tasty. The smell goes on for miles. One tourist compared it to “a tramp’s socks soaked in urine.” The flavor, everyone says, is not nearly as bad as the aroma, although the description alone is sufficient to tickle the gag reflex.
My suspicion was that the hákarl tradition began with one drunken Viking daring another to eat putrid seafood. On first arriving in Reykjavik, I felt up to the challenge, if only because clogged sinuses had deprived me of smell and taste for a week. But Iceland has the world’s cleanest air, and after breathing it for a couple of days, my senses were functional, even keen … and I flinched. My wife was unable to contribute to the YouTube subgenre of “tourist eats hákarl” videos. Clearly no Viking blood flows in these veins.
Be that as it may, I could tell, by the time we left, that the city itself had a smell -- distinctive and unpleasant, like rotten eggs, perhaps. It came from the sulfurous fumes emitted by Iceland’s geothermal springs. The island sits atop (or rather, was created by) the ridge or boundary between the North American and Eurasian tectonic plates. As they move apart from each other, heat surges up from beneath the earth’s crust – hence the volcanoes, and frequent minor earthquakes, as well as water so hot you can boil an egg in it.
Returning to my desk a few days later, I found, atop the clutter, a book showing the glowing mouth of a volcano on its cover, with the title Why Hell Stinks of Sulfur: Mythology and Geology of the Underworld by Salomon Kroonenberg (Reaktion Books, distributed by the University of Chicago Press). The topic had sounded intriguing before our trip. Now the title itself promised to keep the vacation mood alive just a little while longer.
Kroonenberg is emeritus professor of geology at the University of Delft, in the Netherlands -- and Why Hell Stinks of Sulfur seems very much a professor emeritus’s book. It blurs the line between memoir (or travelogue) and a pedagogically compelling exposition of the author’s field. And the author wanders across that field guided by his own sui generis map (“the underworld” is not, so far as I know, a contemporary geological concept), one that leads him across an enormous and various range of ancient and modern literature concerning the world beneath our feet. The earth sciences become part of the humanities, and vice versa. The treatment is essayistic (comparisons to Stephen Jay Gould come to mind occasionally), yet the volume coheres as a whole. It is the work of someone who knows not only his subject but the history and sources of his own mind. This might be called full intellectual maturity, a ripening; the quality is rare.
Kroonenberg’s point of departure is the contrast between the crystalline transparency of the heavens above -- where we can see objects billions of miles away, weather permitting -- and the opaque universe beneath our feet: “the most unknown part of our planet, despite the fact that the center of the Earth is no further away than London is from Chicago.” The deepest location most of us will occupy for very long is six feet under, and not out of curiosity. The association of the underworld with darkness and death comes naturally, and the heroes who travel in its realms (Orpheus, Aeneas, Dante, etc.) usually go there because they’ve lost someone. In the case of Jesus – according to the pseudepigraphal Gospel of Nicodemus – the trip is a mission to subdue Satan and lead “Adam and all the saints out of hell.”
Following the geographical references and descriptions in these and other accounts of subterranean visits, Kroonenberg goes in search of the real-world sources of how the underworld has been depicted – the original river that becomes mythologized as Styx, for example. Kroonenberg is often pursuing suggestions left by explorers and scholars over the centuries. Their lives and writings become part of the narrative, along with the author’s own travels and his explanations of geological phenomena.
The author moves through history in a crablike fashion -- from ancient and medieval stories to recent knowledge of the earth’s structure, but never through the shortest possible route of compare-and-contrast exposition. Alongside the fictional or legendary figures, more and more historical figures appear as the chapters proceed, bringing their speculations on stage. (Leonardo da Vinci drew a reasonably good cross-section of the Earth, showing the layers beneath its crust. René Descartes “was the first to assume that the core of the Earth was hot.”)
All the while, Kroonenberg’s personal recollections weave in and out of the text, from his childhood scientific interests to aspects of his career, such as collaborating with specialists in pedology, the study of dirt, while working at an agricultural school:
“Experienced soil surveyors roll a small clump of soil into a sausage and stick it into their mouths, chew it, ponder it for a while, and then pronounce their verdict: light loam, 15 percent clay. And then it starts, as there is always more than one pedologist around the pit at any one time. They take turns to jump in, pick at the wall, and taste the soil: ‘I think it’s heavy loam, 20 percent clay.’ … There are endless discussions on the basis of qualitative, subjective observations, where the lack of statistical evidence is compensated for by years of experience in hundreds of pits. Bullshit around the pit, that’s what we call it.”
Not surprisingly, one of the author’s favorite books as a child was Journey to the Center of the Earth, by Jules Verne, in which the narrator and his uncle, Professor Lidenbrock, gain access to the subterranean world through a volcano in Iceland. Kroonenberg recalls that the uncle was “an irritable and conceited scholar who gave a not particularly popular course on mineralogy” and gathered knowledge “for himself and not for others.”
In the Scholastic Books edition of Verne’s novel that I read constantly as a kid, that little bit of characterization was left out. Perhaps the editors worried that it was too unflattering a picture of a teacher. In any case, Kroonenberg himself is nothing like the uncle, if his book is anything to go by. It makes me want to go back to Iceland to have a look at that volcano. Next time, I might also sample a bit of hákarl, or, failing that, chew some dirt.
The Digital Public Library of America, an online repository of the nation's historical and cultural riches, will launch as scheduled tomorrow, though its formal opening event has been canceled in the wake of Monday's attacks in Boston, the project's director announced Tuesday. In the statement, Dan Cohen noted that the bombings took place in close proximity to the Boston Public Library, where the opening event was to be held. (That is also right near the finish line of the Boston Marathon, the target of the attacks.) The fact that the area near the library has been closed, and the need for the library's staff members, "like so many other honorable public servants in Boston, ... to be there for the surrounding community first," make canceling the event the obvious choice, he said. A larger event will be held in the fall.
But "[t]he new DPLA site will still go live at noon ET on Thursday as planned, and we look forward to sharing the riches of America’s libraries, archives, and museums. Although we have canceled all of the formal events, DPLA staff will be available all day online, and informally in person in the late afternoon in the Boston area (at a location to be determined), for those taking their first look," Cohen said. "I see the building of a new library as one of the greatest examples of what humans can do together to extend the light against the darkness. In due time, we will let that light shine through."