A couple of months ago I interrupted several years of procrastination and finally got around to a time-consuming bit of housework: unpacking each volume from every shelf in my library, flipping it (the shelf, that is), and then putting the books into a more orderly state than they had been in for a long time. It was the work of several days. The shelves are thick and sturdy, but they had borne two rows of books, plus whatever could be fitted in horizontally, for more than a decade. With a dozen tiers to process -- at eight shelves per tier, and 25 to 50 volumes per shelf -- I had an incentive to build up all the mindless, robotic momentum possible. Stopping to read anything was strictly forbidden, for all the good that did.
They were, and still are, organized alphabetically by author’s name. Friends occasionally express dismay at this. It seems the most impersonal system possible short of arranging them by color. But putting them back on the shelf -- after sifting and sorting them, and a lot of dusting -- proved anything but impersonal. It was comparable to reading an old journal – that is, an experience of numbing repetitiveness, interrupted by melancholy and embarrassment. Several volumes have inscriptions from friends who have died. My copy of the collected Edgar Allan Poe was a Christmas present from pre-adolescence, when bookplates evidently struck me as the height of sophistication. What is stranger -- the extensive academic literature concerning UFO-based religions, or the fact that I seem to own all of it?
Memory kept sabotaging discipline. It’s amazing the job ever got done. The same cannot be said of winnowing through a couple of filing cabinets loaded with photocopies, a week or so later, which involved no more complex sentiment than a kind of satisfying ruthlessness. And with digital text, you don’t even get that. Every so often I copy all the e-books and article PDFs from the laptop to a flashdrive, which then gets dropped into a coffee cup on my desk, along with the others. Sorting and purging the e-library hardly seems worth the effort. Any item in it can be located and extracted within a few minutes. I have no fond memory of acquiring any of them. Downloading a book from Amazon must be consumerism at its most disenchanted. For that matter, thinking back on the e-books I’ve read, what comes to mind is almost always information, rather than the experience of reading them.
Andrew Piper’s Book Was There: Reading in Electronic Times (University of Chicago Press) occupies a niche somewhere between a couple of fields of study that were already interdisciplinary. One is the history of the book, from scroll to e-reader. The other is a phenomenological psychology of reading – an effort, that is, to describe the concrete experience of engaging with the written word, which involves more than the sense of sight, or even the neural processes that somehow convert squiggles into meaning.
“Books have been important to us,” Piper writes in a passage that made me glad to have read him, “because of the way our interactions with them span several domains of sensory and physical experience. Whether it is through the acts of touch, sight, sound, sharing, or acquiring a sense of place, [our] embodied, and at times impersonal, ways of interacting with books coalesce to magnify the learning that takes place through them. The same information processed in different ways and woven together is one of the profound secrets of bookish thought.”
Piper, an associate professor of languages, literature, and culture at McGill University, in Montreal, won the Modern Language Association Prize for a First Book for Dreaming in Books: The Making of the Bibliographic Imagination in the Romantic Age (2009), also published by the University of Chicago Press; and his paper “Rethinking the Print Object: Goethe and the Book of Everything” (2006) received the Goethe Society of North America’s annual essay prize. While no less grounded in European cultural history than his earlier work, Book Was There (its title taken from Gertrude Stein) is more digressive and memoiristic. Parenthood supplements scholarship: one of his children learned to read as Piper was writing the book; his reflections on reading as an aspect of self-fashioning are at least partly grounded in family life.
His intent is not -- as the subtitle “Reading in Electronic Times” might suggest -- a screed against the e-text flood. Book Was There shows a wide knowledge of contemporary digital art and literature, and Piper makes a brief mention of his role in a collaborative project to create a computer model of the impact of The Sorrows of Young Werther on subsequent literature. Like anyone who has given the matter more than a soundbite’s worth of thought, he recognizes that the relationship between the cultural system now emerging and the previous thousand years of human civilization involves both continuities and disruptions, for both better and worse.
What Piper does insist on is the specificity of how we interact with text when incarnated as the artifact of the three-dimensional book. This begins with the hand, which navigates through a bound volume in a way distinct from the turning of a scroll or the button-punching we do on Kindle or Nook. (For one thing, the ancient and the digital format resemble each other at least as much as either does the codex or a book from the Gutenberg era.) One of Piper’s core ideas, radiating out in several directions throughout the book, is that the sense of touch creates “a form of redundancy” in the overall experience of reading, “enfolding more sensory information into what we see and therefore what we read.”
Someone who has lived closely with a given volume for a long time will have a sense of what Piper means. Even without bookmarks or notes, you often know how to look up something in it fairly quickly. The citation is also a location which you can (literally) feel your way to finding.
But there is more to it than that. “As early as the twelfth century,” Piper notes, “writers began drawing hands in the margins of their books to point to important passages. Such a device gradually passed into typescript and became a commonplace of printed books.” And in that regard it mimicked the book’s own role in pointing to specific aspects of the world – making them, in a hand-related analogy, “graspable.” (The implicit contrast here would be the link, which serves as another way to direct the reader’s eyes, but with the constant risk of diffusing attention instead of directing it.)
Handwriting is another mode of engagement with text -- one considerably less efficient than today’s cut-and-paste norm, as Piper acknowledges, but of value precisely for its slowness, which permits incorporation of meaning rather than the aggregation of content. He cites research showing a significant relationship between writing by hand and drawing:
“Early elementary school students who draw before they write tend to produce more words and more complex sentences than those who do not. And as historians of writing have shown, writing makes drawing more analytical. It allows for more complex visual structures and relations to emerge. As Goethe remarked, word and image, drawing and writing, are correlates that eternally search for one another. Handwriting is an integral means of their convergence.”
Cyberculture is all about convergence, if not in Goethean terms. It overloads the reader’s sensorium through every conceivable channel of communication (preferably at the same time) while bombarding us with invitations to respond to its messages, right away. “Interactivity is a constraint,” Piper writes, “not a freedom.” As if glossing his point, a satirical article in The Onion recently reported that Internet users had had enough. “Nobody needs to get my immediate take on everything I see online,” it quotes one woman as saying. “…. At best I’m just going to parrot back some loose approximation of what I’ve heard before, which will just prove that I never should have weighed in in the first place.”
Piper suggests that reading of the older sort provokes a kind of anxiety in the culture now because of its seeming isolation and inertia – so out of step with the drive for connectivity, quantifiable impact, and a rapid turnover in goods and services. But -- to continue his point -- the physical inertia tends to generate a much more intense internal dynamism, making for more complex and lasting patterns of meaning.
That sounds right. Given the limits of space, my acquisition of hardbacks and paperbacks must slow down; at this point, the ones on hand are saturated enough with significance to last the rest of my days. But the e-texts filling my coffee cup can accumulate as rapidly as ever. No shelf bends under the weight, and their imprint on my memory is like footprints in the snow.
Submitted by Rob Weir on January 22, 2013 - 3:00am
Stewart Brand is credited with coining the phrase "information wants to be free." In the wake of the suicide of 26-year-old cyber activist Aaron Swartz, we need to re-evaluate that assumption.
Brand, the former editor of The Whole Earth Catalog and a technology early adopter, is a living link between two great surges in what has been labeled "the culture of free": the 1967 Summer of Love and the Age of Information that went supernova in the late 1990s. Each period has stretched the definition of "free."
During the Summer of Love, the Diggers Collective tried to build a money-free enclave in San Francisco’s Haight-Ashbury district. They ran "free" soup kitchens, stores, clinics and concerts. Myth records this as a noble effort that ran aground; history reveals less lofty realities. "Free" was in the eye of the beholder. The Diggers accumulated much of the food, clothing, medicine, and electronic equipment it redistributed by shaking down local merchants like longhaired mob muscle. Local merchants viewed Digger "donations" as a cost of doing business analogous to lost revenue from shoplifting. Somebody paid for the goods; it just wasn’t the Diggers or their clients.
Move the clock forward. Aaron Swartz’s martyr status crystallizes as I type. As the legend grows, Swartz was a brilliant and idealistic young man who dropped out of Stanford and liberated information for the masses until swatted down by multinational corporations, elitist universities, and the government. Faced with the potential of spending decades behind bars for charges related to hacking into JSTOR, a depressed Swartz committed suicide. (In truth, as The Boston Globe has reported, a plea bargain was nearly in place for a four-to-six-month sentence.)
I am sorry that Swartz died, and couldn’t begin to say whether he was chronically depressed, or if his legal woes pushed him over the edge. I do assert, though, that he was no hero. The appropriate label is one he once proudly carried: hacker. Hacking, no matter how principled, is a form of theft.
It’s easy to trivialize what Swartz did because it was just a database of academic articles. I wonder if his supporters would have felt as charitable if he had "freed" bank deposits. His was not an innocent act. The Massachusetts Institute of Technology and the Commonwealth of Massachusetts took the not-unreasonable position that there is a considerable difference between downloading articles from free accounts registered with a university, and purloining 4.8 million documents by splicing into wiring accessed via unauthorized entry into a computer closet. That’s hacking in my book – the moral equivalent of diverting a bank teller with a small transaction whilst a partner ducks behind the counter and liberates the till.
Brand and his contemporaries often parse the definition of free. Taking down barriers and making data easier to exchange is “freeing” in that changing technology makes access broader and cheaper to deliver. Alas, many young people don’t distinguish between "freeing" and "free." Many of my undergrads think nearly all information should come at no cost – free online education, free movies, free music, free software, free video games…. Many justify this as Swartz did: that the value of ideas and culture is artificially inflated by info robber barons.
They’re happy to out the villains: entrenched university administrations, Hollywood producers, Netflix, the Big Three record labels, Amazon, Microsoft, Nintendo, Sega…. I recently had a student pulled from my class and arrested for illegal music downloading. He was considerably less worried than Swartz and pronounced, "I fundamentally don’t believe anyone should ever have to pay for music." This, mind you, after I shared tales of folk musicians and independent artists that can’t live by their art unless they can sell it.
Sorry, but this mentality is wrong. Equally misguided are those who, like Swartz before his death, seek to scuttle the Stop Online Piracy Act and the Protect Intellectual Property Act. Are these perfect bills? No. Do they protect big corporations, but do little to shelter the proverbial small fish? Yes. Do we need a larger political debate about the way in which conglomeration has stifled innovation and competition? Book me a front-row seat for that donnybrook. Are consumers of everything from music to access to academic articles being price gouged? Probably. But the immediate possibility of living in a world in which everything is literally free is as likely as the discovery of unicorns grazing on the Big Rock Candy Mountain.
Let’s turn to JSTOR, the object of Swartz’s most recent hijinks. (He was a repeat offender.) JSTOR isn’t popular among librarians seeking subscription money, or those called upon to pay for access to an article (which is almost no one with a university account who doesn’t rewire the network). Many wonder why money accrues to those whose only "creation" is to aggregate the labor of others, especially when some form of taxpayer money underwrote many of the articles. That’s a legitimate concern, but defending Swartz’s method elevates vigilantism above the rules of law and reason. More to the point, reckless "liberation" often does more harm than good.
JSTOR charges university libraries a king’s ransom for its services. Still, few libraries could subscribe to JSTOR’s 1,400 journals more cheaply. (Nor do many have the space to store the physical copies.) The institutional costs for top journals are pricey. Go to the Oxford University Press website and you’ll find that very few can be secured for under $200 per volume, and several are over $2,000. One must ultimately confront a question ignored by the culture of free: Why does information cost so much?
Short answer: Because journals don’t grow on trees. It’s intoxicating to think that information can be figuratively and literally free, until one assembles an actual journal. I don’t care how you do it; it’s going to cost you.
I’m the associate editor of a very small journal in the academic pond. We still offer print journals, which entails thousands of dollars in printing and mailing costs for each issue. Fine, you say, print is dead. Produce an e-journal. Would that be "free?" Our editor is a full-time academic. She can only put in the hours needed to sift articles, farm them out for expert review, send accepted articles to copy editors, forward copy to a designer, and get the journal to subscribers because her university gives her a course reduction each semester. That’s real money; it costs her department thousands of dollars to replace her courses. Design, copy editing, and advertising fees must be paid, and a few small stipends are doled out. Without violating confidentiality I can attest that even a modest journal is expensive to produce. You can’t just give it away, because subscribers pick up the tab for everything that can’t be bartered.
Could you do this free online with no membership base? Sure – with a team of editors, designers, and Web gurus that don’t want to get paid for the countless hours they will devote to each issue. Do you believe enough in the culture of free to devote your life to uncompensated toil? (Careful: The Diggers don’t operate those free stores anymore.) By the way, if you want anyone to read your journal, you’ll give it to JSTOR or some other aggregator. Unless, of course, you can drum up lots of free advertising.
The way forward in the Age of Information begins with an honest assessment of the hidden costs within the culture of free. I suggest we retire the sexy-but-hollow phrase “information wants to be free" and resurrect this one: "There’s no such thing as a free lunch." And for hackers and info thieves, here’s one from my days as a social worker: "If you can’t do the time, don’t do the crime."
Rob Weir teaches history at Smith College. He is the author of Inside Higher Ed's "Instant Mentor" career advice column.
Michael Barera has been named Wikipedian in residence at the Gerald R. Ford Presidential Library at the University of Michigan -- the first such position at a presidential library. Barera will focus on expanding the availability of information about President Ford and the library's holdings on Wikipedia through the Gerald Ford WikiProject.
I don’t think there’s much more to say about Aaron Swartz. I didn’t know him personally, but like many others I am a beneficiary of the work he did. And I have agreed for much of my life as an academic with the thinking that led him to his fateful act in a closet at the Massachusetts Institute of Technology. Most centrally, that there are several ethical imperatives that should make everything that JSTOR (or any comparable bundling of scholarly publication) holds freely available to everyone: much of that work was underwritten directly or indirectly by public funds, the transformative impact of open access on inequality is already well-documented, and it's in keeping with the obligations and values that scholars allege to be central to their work.
Blame is coming down heavy on MIT and JSTOR, both of which were at pains to distance themselves from the legal persecution of Swartz even before news of his suicide broke, particularly JSTOR, which very early on asked that Swartz not be prosecuted. Blame is coming down even more heavily, as it should, on federal prosecutors who have been spewing a load of spurious garbage about the case for over a year. They had discretion and they abused it grievously in an era when vast webs of destructive and criminal activities have been discretionarily ignored if they stem from powerful men and powerful institutions. They chose to be Inspector Javert, chasing down Swartz over a loaf of bread.
But if we’re talking blame, then there’s a diffuse blame that ought to be conferred. In a way, it’s odd that MIT should have been the bagman for the ancien regime: its online presence and institutional thinking about digitization have otherwise been quite forward-thinking in many respects. If MIT allowed itself to be used by federal prosecutors looking to put an intellectual property head on a pike, that is less an extraordinary gesture by MIT and more a reflection of the academic default.
I’ve been frustrated for years, like other scholars and faculty members who take an interest in these issues, at the remarkable lassitude of academia as a whole toward publication, intellectual property and digitization. Faculty who tell me passionately about their commitment to social justice either are indifferent to these concerns or are sometimes supportive of the old order. They defend the ghastly proposition that universities (and governments) should continue to subsidize the production of scholarship that is then donated to for-profit publishers who then charge high prices to loan that work back to the institutions that subsidized its creation, and the corollary, demanded by those publishers, that the circulation of such work should be limited to those who pay those prices.
Print was expensive, print was specialized, and back in the age of print, what choice did we have? We have a choice now. Everything, everything, about the production of scholarship can be supported by consortial funds within academe. The major added value is provided by scholars, again largely for free, in the work of peer review. We could put the publishers who refuse to be partners in an open world of inquiry out of business tomorrow, and the only cost to academics would be the loss of some names for journals. Every journal we have can just have another name and be essentially the same thing. Every intellectual, every academic, every reader, every curious mind that wants to read scholarly work could be reading it tomorrow if they had access to a basic Internet connection, wherever they are in the world. Which is what we say we want.
I once had a colleague tell me a decade ago that this shift wouldn’t be a positive development because there’s a digital divide, that not everyone has access to digital devices, especially in the developing world. I asked this colleague, whose work is focused on the U.S., if she knew anything about the costs and problems that print imposed on libraries and archives and universities around the world, and of course she didn’t. Digitized scholarship can’t be lost or stolen the way that print can be, it doesn’t have to be mailed, it doesn’t have to have physical storage, it can’t be eaten by termites, it can’t get mold on it. If it were freed from the grasp of the publishers who charge insane prices for it, it could be disseminated for comparatively small costs to any institution or reader who wants access. Collections can be uniformly large everywhere that there’s a connection: what I can read and research, a colleague in Nairobi or Beijing or Moscow or São Paulo can read and research, unless their government (or mine) interferes. That simply couldn’t be in the age of print. Collections can support hundreds or thousands of simultaneous readers rather than just the one who has something checked out. I love the materiality of books, too, but on these kinds of issues, there’s no comparison. And no justification.
The major thing that stands in the way of the potentiality of this change is the passivity of scholars themselves. Aaron Swartz’s action, and its consequences, had as much to do with that generalized indifference as it did with any specific institution or organization. Not all culture needs to be open, and not all intellectual property claims are spurious. But scholarship should be and could be different, and has a claim to difference deep in its alleged values. There should be nothing that stops us from achieving the simplest thing that Swartz was asking of us, right now, in memory of him.
Timothy Burke is professor of history at Swarthmore College.
Between all the fiscal cliff-hanging and the preparations for the inauguration later this month, nobody inside the Beltway is paying much attention to the burgeoning political-science literature on the electoral significance of presidential dog ownership.
Well, official Washington has its priorities, and I have mine. A paper on the topic appears in the January issue of the American Political Science Association’s journal PS: Political Science & Politics. “Burgeoning” is something of an overstatement, but it’s the second time an article on dogs and the presidency has appeared there in a couple of years. So, close enough.
The work of Matthew L. Jacobsmeier and Daniel C. Lewis, two assistant professors of political science at the University of New Orleans, “Barking Up the Wrong Tree: Why Bo Didn’t Fetch Many Votes for Barack Obama in 2012,” is full of statistics and (let its title be fair warning) puns. Their argument builds on the work of Diana C. Mutz (right, I know) whose paper “The Dog That Didn’t Bark: The Role of Canines in the 2008 Campaign” appeared in the October 2010 issue of PS. It would be more accurate to say that Jacobsmeier and Lewis undermine and overturn her analysis, but at least they are friendly about it. (Mutz is a professor of poli sci at the University of Pennsylvania and Princeton University.)
Documentation of the role of pets in the history of the executive branch already existed when Mutz set to work, though it was, for the most part, anecdotal. But she could cite a survey from 2006 indicating that, in local elections at least, not quite 99 percent of dog owners responded that “a candidate’s position or track record on issues such as breed discrimination, breed bans, or leash laws played a significant role in their electoral choice.”
That statistic is at least somewhat questionable, coming as it does from My Dog Votes™, identified as “the world’s only company with a mission of Saving Dogs and Democracy … [by means of] clothing, accessories, and real campaign gear.” And the effect of dog-related issues on voter behavior during national elections remains very much an understudied question. Be that as it may, Mutz ventured a significant interpretation of the 2008 election -- which, while historic, was short of the landslide many expected.
She wrote: “Early in his run for the presidency, Obama made a widely publicized promise to get his daughters a dog after the election, regardless of the outcome. This gesture may have been superficially endearing, as promises go. However, I argue that in the end, this promise backfired on Obama by raising the salience of his family’s doglessness and thus alienating a significant proportion of the population.”
Mutz drew on data collected by the National Annenberg Election Study, a poll that “tracked a large, randomly selected sample of respondents throughout the 2008 presidential campaign.” Pet ownership was one of numerous characteristics recorded in the survey, along with gender, income, educational level, size of household, gun ownership, party identification, and the respondent’s perception of whether the economy was improving or worsening.
The problem was to determine how much weight dog ownership had as a variable affecting voters’ feelings about whether they would be likely to support a candidate. That means taking into account, through regression analysis, the strength of the other factors (gender, income, etc.) and any confluence between them. The results varied across Mutz’s four models, but dog ownership consistently proved to be a negative predictor for an Obama ballot for 1.7 to 5 percent of those surveyed – and among subjects who reported their votes, “the odds decreased by 16 percent if the respondent was a dog owner.”
Mutz offered two possible explanations for this remarkable gap. One was the failure of group identification: “The minimal group paradigm suggests that in-group favoritism can be stimulated even by very weak, transient, and meaningless group identifications.... Whether for symbolic or imputed substantive reasons, group identification theory suggests that, all else being equal, dog owners should be drawn to dog-owning candidates.”
An alternative (not mutually exclusive by any means) was the “congruity-oriented theory” that owners of a particular sort of pet will prefer candidates with similar characteristics, such as “emotional transparency and straightforward displays of emotion” in the case of dogs. That would present difficulties for an altogether feline politician such as Obama.
The scholarship has advanced considerably since the days of Gibbs Davis’s Wackiest White House Pets (2004), and it should come as no surprise to learn that others have revisited Mutz’s data from an alternative perspective. While admiring her analysis as “particularly elegant and compelling,” Jacobsmeier and Lewis challenge it on the basis of “our graduate school experiences [which] included Pavlovian training in the detection of omitted variable bias.”
The omitted variable, in this case, is region. The American Veterinary Medical Association reports that 37.2 percent of American households included a dog in 2006, but they are not evenly distributed. “Rates of dog ownership clearly vary with geographical location,” write Jacobsmeier and Lewis. “Using census region as the geographical unit, dog ownership is most common in the South and least common in the Northeast.”
The data also shows that “a large gap in dog ownership exists between black and white respondents” -- with whites having the higher rates, as do gun owners, home owners, and people living in rural areas. Mutz’s regression models took into account respondents’ party affiliations and how strongly they identified themselves as liberal or conservative, and tried thereby to isolate dog ownership as an independent factor. Instead, it proves to be a kind of proxy for “red state”-ism.
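The omitted-variable problem Jacobsmeier and Lewis describe is easy to demonstrate with a toy simulation (purely illustrative; the variable names and effect sizes below are invented, not drawn from the Annenberg data). If region drives both dog ownership and vote choice, a regression that leaves region out will wrongly attribute region's effect to dog ownership; adding the regional control makes the spurious "dog effect" vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: a "south" indicator confounds the relationship.
# It raises the rate of dog ownership AND lowers support for the
# candidate. Dog ownership itself has NO true effect on support.
south = rng.binomial(1, 0.4, n)                     # region indicator
dog = rng.binomial(1, 0.25 + 0.30 * south)          # ownership depends on region
y = 0.5 - 0.20 * south + rng.normal(0, 0.1, n)      # support depends only on region

def ols(columns, y):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_naive = ols([dog], y)         # omits region: dog soaks up region's effect
b_full = ols([dog, south], y)   # controls for region: dog effect near zero

print("naive dog coefficient:     ", round(b_naive[1], 3))
print("controlled dog coefficient:", round(b_full[1], 3))
```

With these made-up numbers the naive regression reports a clearly negative coefficient on dog ownership even though none exists by construction, which is the shape of the argument against treating dog ownership as an independent factor rather than a proxy for region.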
Among the pools of data the authors tapped into while doing their research, evidently, was an exhaustive collection of canine-pertinent verbs, images, sayings, etc., every single one of which was then incorporated into the paper. It seems like something best done with monomaniacal thoroughness, if you’re going to do it at all. I have managed to keep most of them out of this column, but you can find them all -- and many other interesting points scanted here -- in a prepublication copy of the paper here.
After a successful pilot, JSTOR is launching its Register & Read program, which lets anyone read up to three articles from 1,200 of its journals every two weeks in exchange for demographic information.
A new analysis released by the National Bureau of Economic Research (abstract available here) tracks the changes among the five leading economics journals from 1970 to 2012. Among the trends over that time span:
Annual submissions to the top-5 journals nearly doubled.
The total number of articles published declined from 400 per year to 300 per year.
One journal, American Economic Review, now accounts for 40 percent of the articles published across the five journals, up from 25 percent.
Writing about music, the saying goes, is like dancing about architecture. The implication is that even trying is futile and likely to make the person doing so look absurd.
The line has been attributed to various musicians over the years – wrongly, as it happens, though understandably, given how little of what they do while playing can be communicated in words to people who don’t know their way around an instrument. I don’t know if mathematicians have an equivalent proverb, but the same principle applies. Even more strictly, perhaps, since most nonmathematicians can’t even tell when things go out of tune. And in many of the higher realms, math drifts far from any meaning that could ever be expressed outside whatever latticework of symbols has been improvised for the occasion. (Kind of like if Sun Ra went back to performing on his home planet.)
Against all odds, however, there is good writing about music – as well as The Best Writing on Mathematics 2012, the third anthology that Mircea Pitici has edited for Princeton University Press in as many years. He is working toward a Ph.D. in mathematical education at Cornell University, and teaches math and writing there and at Ithaca College. A majority of the pieces come from journals such as Science, Nature, The Bulletin of the American Mathematical Society and The South African Journal of Philosophy, or from volumes of scholarly papers. But among the outliers is an article from The Huffington Post, as well as a chapter reprinted from an anthology called Dating Philosophy for Everyone: Flirting With the Big Ideas (Wiley-Blackwell, 2011).
There’s also a paper from the proceedings of the Fifth International Meeting of Origami Science, Mathematics, and Education, which opens with the sentence: “The field of origami tessellations has seen explosive growth over the past 20 years.”
Chances are you did not know that. It came as news to me, anyway, and I cannot claim to have followed every step of the presentation, which concerns the algorithms for creating fantastically intricate designs (resembling woven cloth) out of a single flat, uncut sheet of paper.
The author, Robert J. Lang, is a retired specialist in lasers and optoelectronics; his standards of numeracy are a few miles above the national average, even if the math he’s using is anything but stratospheric. But Lang is also an internationally exhibited origami artist. The images of his work accompanying the article offer more than proof of what his formulas and diagrams can produce; they are elegant in a way that hints at the satisfaction the math itself must have yielded as he worked it out.
Other papers make similar connections between mathematics and photography, dance, and (of course) music. But one of the themes turning up in various selections throughout the book is the specificity of what could be called mathematical pleasure itself, which can’t really be compared to other kinds of aesthetic experience.
In his essay “Why Is There Philosophy of Mathematics at All?” Ian Hacking -- retired from a university professorship at the University of Toronto -- considers the hold that math has had on the imagination and arguments of (some) philosophers. Not all have been susceptible, of course. Among humans, “a high degree of linguistic competence is [almost] universally acquired early in life,” but the ability “for even modestly creative uses of mathematics is not widely shared among humans, despite our forcing the basics on the young.” And as with the general population, so among philosophers.
But those who have thought carefully about mathematics (e.g., Plato and Husserl) or even contributed to its development (Descartes and Leibniz) share something that Hacking describes this way:
“[T]hey have experienced mathematics and found it passing strange. The mathematics that they have encountered has felt different from other experiences of observing, learning, discovering, or simply ‘finding out.’ This difference is partly because the gold standard for finding out in mathematics is demonstrative proof. Not, of course, any old proof, for the most boring things can be proven in the most boring ways. I mean proofs that deploy ideas in unexpected ways, proofs that can be understood, proofs that embody ideas that are pregnant with further developments…. Most people do not respond to mathematics with such experiences or feelings; they really have no idea what is moving those philosophers.”
Beyond the pleasure of proof (“Eureka!”) lies unfathomable mystery – of at least a couple of varieties. One is the problem addressed in “Is Mathematics Discovered or Invented?” by Timothy Gowers, a professor of mathematics at Cambridge University. Be careful how you answer that question, since the nature of reality is at stake: “If mathematics is discovered, then it would appear that there is something out there that mathematicians are discovering, which in turn would appear to lend support to a Platonist conception of mathematics….”
Or to put it another way and leave Plato out of it for a moment: If “there is something out there that mathematicians are discovering,” then just exactly where is “out there”? Answering “the universe” is dodging the question. We might naively think of arithmetic or even some parts of geometry as some kind of generalization from observed phenomena. But nobody has empirical knowledge of a seven-dimensional hypersphere. So how -- or again, perhaps more pertinently, where, in what part of reality -- does the hypersphere exist, such that mathematicians have access to it?
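(For readers curious what the object in question even is: in the standard definition, the seven-dimensional hypersphere is the set of points at unit distance from the origin in eight-dimensional space -- a surface with seven dimensions of its own, which is why no amount of looking around will produce one. In symbols,

$$S^7 = \{\, x \in \mathbb{R}^8 : \|x\| = 1 \,\},$$

a perfectly well-defined thing, just not an observable one.)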
A neurobiological argument could be made that the higher mathematical concepts exist in certain cognitive modules found in the brain. (And not in everyone’s, suggests Hacking’s essay.) If so, it would make sense to say that such concepts are created rather than discovered. But then the mystery only deepens. Scientists have repeatedly found the tools for understanding the physical universe in extremely complex and exotic forms of mathematics developed by pure mathematicians who not only have no interest in finding a practical application for their work, but feel a bit sullied when one is found.
Translating math’s hieroglyphics into English prose is difficult but – as the two dozen pieces reprinted in Best Writing show – not always completely impossible. Mircea Pitici, the editor, pulls together work at various levels of complexity and from authors who pursue their subjects from a number of angles: historical or biographical narrative, philosophical speculation both professional and amateur, journalistic commentary on the state of math education and its discontents.
And the arrangement of the material is, like the selection, intelligent and even artful. Certain figures (the 19th-century mathematicians Augustus De Morgan and William Hamilton) and questions (including those about math as experience and mystery) weave in and out of the volume -- making it more unified than “best of” annuals tend to be.
That said, I am dubious about there being a Best Writing ’13 given the dire implications of certain discoveries (or whatever) by Mayan numerologists. This will be the last Intellectual Affairs column for 2012, if not for all time. I’d prefer to think that, centuries ago, someone forgot to carry a digit, in which case the column will return in the new year. And if not, happy holidays even so.