Republicans in the Michigan House of Representatives are threatening to cut the budget of the University of Michigan if it does not provide more details on its research with stem cells, The Detroit Free Press reported. The Republicans are specifically demanding information about the exact number of stem cell lines at Michigan, something university officials say is more complicated than it may sound. The university has turned over a report on its work with stem cells, but that hasn't satisfied the legislators.
Research universities should incorporate more "arts making" -- the process of creating works of art -- into their curriculums to help develop "new generations of leaders who are adept in the use of all of their creative cognitive faculties," says a new report from a group of campus leaders convened last year by the University of Michigan. The report, developed by administrators and faculty members from about two dozen of the leading research institutions in the United States, examines what the institutions do now (and what they might do) to integrate such work into their curriculums (and extracurricular activities), and how to advocate for a greater role for such a focus.
U.S. appeals court says U. of Oregon may have retaliated against former doctoral student who alleged sex bias in her graduate program; ruling could reshape balance of power between professors and grad students.
College professors are perceived by the public as more unfriendly to religion now than they were seen in 2003, according to the results of a recent poll by the Pew Research Center. Thirty-two percent of respondents said college professors are "unfriendly" toward religion, 37 percent said they view professors as neutral on religion and 14 percent said college professors were friendly to religion. In 2003, 26 percent described professors as unfriendly and 18 percent as friendly.
Republicans and white evangelical Protestants were more likely to say college professors were anti-religion: 56 percent of both groups said professors were unfriendly to religion. Other religious groups, as well as Democrats, generally view professors as neutral.
A theoretical physicist named Eugene Wigner once referred to “the unreasonable effectiveness of mathematics” -- a phrase that, on first hearing, sounds paradoxical. Math seems like rationality itself, at its most efficient and severe. But even someone with an extremely limited grasp of the higher realms of mathematics (your correspondent, for one) can occasionally glimpse what Wigner had in mind. His comment expresses a mood, more than an idea. It manifests a kind of awe.
For example, in the 1920s Paul Dirac came up with an equation that permitted two possible solutions, one of which applied to the electron. The other corresponded to nothing that physicists had ever come across. Some years later, physicists discovered a subatomic particle that did: the positron. The manipulation of mathematical symbols unveiled an aspect of the physical universe that had been previously unknown (even unsuspected). To adapt a line from the Insane Clown Posse’s foul-mouthed appreciation of the wonders of the universe, “This [stuff] will blow your mother[loving] mind.”
True, that. I’ve even felt it when trying to imagine the moment when Descartes first understood that algebra and geometry could be fused into something more powerful than either was separately. (Cartesian grids – how do they work?) But there’s a flipside to Wigner’s phrase that’s no less mind-boggling to contemplate: the existence of “simple” problems that resist solution, driving one generation of mathematicians after another to extremes of creativity. Fermat’s last theorem (formulated 1637, finally proved in 1994) is the most famous example. The four-color map conundrum (formulated 1852, solved 1976) likewise looked deceptively uncomplicated.
And then there's the great, bewildering problem surveyed in William J. Cook’s In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation (Princeton University Press). The challenge has been around in one form or another since 1934. It looks so straightforward that it’s hard to believe no one has cracked it – or will, probably, any time soon.
Here’s the scenario: A traveling salesman has to visit a certain number of cities on a business trip and wants to get back home as efficiently as possible. He wants the shortest possible route that will spare him from going through the same city more than once. If he leaves home with just two stops, no planning is necessary: his route is a triangle. With three stops, it’s still something that he might work out just by looking at the map.
But his company, pinched by the economy, needs to “do more with less,” like that ever works. It keeps adding stops to the list. By the time he has five or six calls to make, planning an itinerary has gotten complicated. Suppose he's leaving home (A) to visit five cities (B, C, D, E, F). He starts out with five possibilities for his first destination, which means four for his second stop. And so on -- one fewer, each time. That means the total number of possible routes is 5 x 4 x 3 x 2 x 1 = 120. Our salesman may be relieved to discover that it's really only half of that, since traveling in the sequence ACDBFEA covers exactly the same distance as doing it the other way around, as AEFBDCA.
Still, picking the shortest of sixty possible routes is a hassle. Let the number of cities grow to 7, and it's up to 2,520. Which is crazy. The salesman needs a way to find the shortest trip, come what may -- even if the home office doubled the number of stops. There must be an app.
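The factorial blow-up the salesman is facing takes only a few lines of Python to see (a toy illustration of the counting argument above, not anything from Cook's book):

```python
from math import factorial

def tour_count(n):
    """Distinct round trips through n cities from a fixed home base,
    counting a route and its reversal as the same tour: n! / 2."""
    return factorial(n) // 2

for n in (5, 6, 7, 10):
    print(f"{n} cities to visit: {tour_count(n):,} possible routes")
```

`tour_count(5)` gives the salesman's 60 routes; `tour_count(32)` -- home plus 32 stops, the 33-city case -- is already a 36-digit number.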
Except, there isn't. I don’t mean with respect to available software, but at the level of a method that could solve the traveling salesman’s dilemma no matter how many cities are involved. A computer can tackle problems on a case-by-case basis, using brute force to calculate the distances covered by every possible route, then selecting the shortest. But that’s a far cry from having an elegant, powerful formula valid for any given number of cities. And even the most unrelenting brute-force attack on the traveling salesman problem (TSP) might not be enough. Finding the shortest way around a 33-city route would require calculating the distances covered by an unimaginably vast number of possible tours. I’m not up to typing out the figure in question, but it runs to 36 digits. And that's for a tour with a mere two-digit number of cities.
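That case-by-case brute-force approach is itself easy to sketch in Python -- a toy version for illustration, with hypothetical coordinates, workable only while the city count stays tiny:

```python
from itertools import permutations
from math import dist

def brute_force_tour(home, cities):
    """Check every ordering of the cities and keep the shortest round trip.
    With n cities this examines n! orderings -- fine for 6, hopeless for 33."""
    best_len, best_tour = float("inf"), None
    for order in permutations(cities):
        tour = (home,) + order + (home,)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Hypothetical map: home at the origin, three stops on a unit square.
length, tour = brute_force_tour((0, 0), [(0, 1), (1, 1), (1, 0)])
```

On this little square the optimum simply walks the perimeter, length 4 -- but each added city multiplies the orderings to check.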
Cook explains what would happen if we tried to compile and compare every possible sequence for a 33-city route using the $133 million IBM Roadrunner Cluster at the Department of Energy, which “topped the 2009 ranking of the world’s 500 fastest supercomputers.” The Roadrunner can do 1,457 trillion arithmetic operations per second. Finding the shortest route would take about 28 trillion years – “an uncomfortable amount of time," Cook notes, "given that the universe is estimated to be only 14 billion years old.”
That does not mean any given problem is insoluble, even with an extraordinarily high number of cities. In 1954, a group of mathematicians in California solved a 49-city problem by hand in a few weeks, using linear programming -- which seems appropriate, since linear programming (LP) was developed to help with business decisions on how best to use available resources. In Pursuit of the Traveling Salesman devotes a chapter to the history of LP and the development of a multipurpose tool called the simplex algorithm. (Cook’s treatment of LP is introductory, rather than technical, though it’s not exactly for the faint of heart.)
Other tour-finding algorithms find clusters of short routes, then link them as neatly as possible. If having absolutely the shortest path isn’t the top priority, various methods can generate an itinerary that might be close enough for practical use. And practical applications do exist, even with fewer traveling salesmen now than in Willy Loman’s day. TSP applies to problems that come up in mapping the genome, designing circuit boards and microchips, and making the best use of fragile telescopes in old observatories.
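The simplest of those good-enough methods is the nearest-neighbor rule: from wherever you are, go to the closest city you haven't seen yet. A minimal sketch (my illustration, not one of Cook's algorithms):

```python
from math import dist

def nearest_neighbor_tour(home, cities):
    """Greedy heuristic: from each stop, head to the closest unvisited city.
    Roughly n^2 distance checks instead of n! route comparisons -- fast,
    but the tour it returns is only approximately shortest."""
    unvisited = list(cities)
    tour, here = [home], home
    while unvisited:
        here = min(unvisited, key=lambda c: dist(here, c))
        unvisited.remove(here)
        tour.append(here)
    tour.append(home)
    return tour

# Same hypothetical square map as before.
tour = nearest_neighbor_tour((0, 0), [(0, 1), (1, 1), (1, 0)])
```

On a map this small the greedy tour happens to be optimal; on larger instances it can be noticeably longer than the true shortest route, which is why such heuristics are usually a starting point to be refined, not an answer.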
But no all-purpose, reliable, let-N-equal-whatever algorithm exists. It's the sort of cosmic untidiness that some people can’t bear. Cook quotes a couple of mathematicians who say that TSP is not a problem so much as an addiction. Combining brute-force supercomputer processing with an array of tour-finding concepts and shortcuts means that it’s possible to handle really enormous problems involving hundreds of thousands of cities. But that also makes TSP a boundary-pushing test of computational power and accuracy. Finding the shortest route is an optimization problem, but so, in a way, is figuring out how to solve it using the tools at hand.
Since 2000, the Clay Mathematics Institute has offered a prize of a million dollars for the definitive solution to a problem of which TSP is the exemplary case. “Study of the salesman is a rite of passage in many university programs,” Cook writes, “and short descriptions have even worked their way into recent texts for middle-school students.” But he also suggests that the smart money would not bet on anyone ever finding the ultimate TSP algorithm, though you’re more than welcome to try.
Another possibility, of course, is that the right kind of mathematics just hasn’t been discovered yet – but one day it will, proving its “unreasonable effectiveness” by solving TSP as well as problems we can’t even imagine at this point. As for our hypothetical traveler, he’d probably feel envy at one of Cook’s endnotes. The author “purchased 50 years of annual diaries written by a salesman” via eBay, he writes, “only to learn that his tours consisted of trips through five or six cities around Syracuse, New York.” He didn't need an algorithm, and got to stay home on weekends.
Science leaders in Japan are warning that the country's universities are facing a shortage of young research talent, Nature reported. In the last 30 years, the number of science faculty members at state universities has grown from 50,000 to 63,000, but the number under the age of 35 has dropped from 10,000 to 6,800. Tight budgets have forced universities to limit hiring, leading to concerns about the future of science programs that aren't recruiting enough new professors.
Paul H. Frampton, a physicist who holds an endowed chair at the University of North Carolina at Chapel Hill, is in an Argentine jail facing cocaine charges, and he is fighting both those charges and the university's decision to suspend his salary, The News & Observer of Raleigh reported. Frampton said that the cocaine was planted in his luggage, and that he is confident he will be able to show that in court. But he said he needs his salary paid, and is frustrated that it was cut off. Frampton said that Provost Bruce Carney blocked his pay out of professional jealousy. A university spokeswoman declined to say why Frampton's pay was suspended, but university officials have noted that he is not teaching as scheduled. But Frampton said he has continued to work 40-plus hours a week in prison, and has been advising his graduate students from afar (one of his advisees confirmed this).
In 2010, the National Science Foundation and National Endowment for the Arts convened a historic workshop -- it was their first jointly funded project. This meeting marked the beginning of a new level of national conversation about how computer science and other STEM disciplines can work productively with arts and design in research, creation, education, and economic development. A number of projects and follow-up workshops resulted in 2011. I was lucky enough to attend three of these events and, in the midst of all the exciting follow-up conversations, I couldn't help but wonder: What about the digital humanities?
After all, the digital humanities have made it now. A recent visualization from University College London shows more than 100 digital humanities centers spread across the globe. There are dedicated digital humanities funding groups within the National Endowment for the Humanities and Microsoft Research. The University of Minnesota Press published a book of Debates in the Digital Humanities in January.
So why doesn't the digital humanities have more of a seat at the table? Why is there the stereotype that, while computer scientists and digital artists have much to discuss, digital humanists only want to talk about data mining with the former and data visualization with the latter? I believe it is because the perception has developed, helped along by many in the field itself, that digital humanities is primarily about data.
Certainly a grasp of data -- the historical record, our cultural heritage -- is a great strength of the humanities. But in the digital world, the storage, mining, and visualization of large amounts of data is just one small corner of the vast space of possibility and consequence opened by new computational processes -- the machines made of software that operate within our phones, laptops, and cloud servers.
A key experience in my journey to understanding this began with a debate about James Meehan's Tale-Spin, the first major story generation system. I had always been basically uninterested in Tale-Spin, though I knew it was considered a landmark on the computer science end of electronic literature. I simply didn't get excited by the stories I had seen reprinted in the many scholarly discussions of the system.
During the debate it became clear that I would have to look a little deeper. When I looked at Tale-Spin's computational processes, what I found was surprising and complex, as evocative and strange as any of Calvino's invisible cities. Tale-Spin operates according to rules constructed as a simulation of human behavior, built according to cognitive science ideas that were current at Yale in the mid-1970s, when it was designed. For example, in this model, when characters interact, they take elaborate psychological actions, projecting multiple possible worlds to see if any course of action might create a world they desire.
In short, I learned that it is Tale-Spin's processes that have the literary value, creating a fictional world that gets its fascinating strangeness from taking a recognizable aspect of human behavior, exaggerating it, and stripping away almost everything else -- answering the question, "What would fiction look like if we accept the model of humanity being proposed by this kind of cognitive science?" More broadly, reading the processes of Tale-Spin also helped me think about the limits of simulations of human behavior, even those informed by the most recent scientific ideas, as well as how ideas and biases can be encoded in software in ways that are invisible to those who only see the output.
Finally, it helped me learn an important lesson about making media: fascinating, successful, hidden processes do little to make the audience experience stronger. As a result of these realizations I had to apologize to colleagues for dismissing Tale-Spin -- and my fascination with the project grew until it became a central object of study for my book Expressive Processing.
Over the years since, it has become clear to me that there are many other processes that cry out for attention. All the tools of our software society, from the document-crafting Microsoft Word to the architecture-designing AutoCAD, are enabled and defined by processes. Software processes operate Walmart's procurement system and Homeland Security's terrorist watch list. The interactivity of mobile apps and websites and video games is created through the design of processes. In other words, it is human-designed and human-interpretable computational processes that enable software to shape our daily work, our homes, our economy, our interpersonal communication, and our new forms of art and media. Processes even enable the data mining that drives much digital humanities work (and Amazon's recommendation system).
For these reasons and more, when computer scientists and digital artists get together, most of what they talk about is novel processes. Why invite digital humanists, if they're going to keep dragging the conversation back to data?
Of course, this stereotype is a distortion of the history and present of humanist engagement with the digital world, but it passes for truth far too often. Something needs to be done to fight it. I believe all of us with a stake in the future of the digital humanities -- and perhaps more of us have a stake than realize it at the moment -- should push for a vision of the field that acknowledges that it has never simply been about data. Here are two areas where I think pressure is particularly important.
First, the humanities is not simply defined by the data it has mastered. Whether in literature, philosophy, media studies, or some other discipline, humanists understand the data they study through particular methods. Two decades ago Phil Agre powerfully demonstrated that humanities methods could shed important new light on software processes. In his Computation and Human Experience, he performs close readings of computational systems and situates them within histories of thought. His analysis serves a primary humanities mission of helping us understand the world in which we live, while also helping reveal sources of recurring patterns of difficulty for computer scientists working in AI.
It is an early example of what is now increasingly being called "software studies" -- a tradition in which my work on Tale-Spin participates. In software studies, humanities methods and values engage with the specific workings of computational processes. This sort of approach has the potential to become an exciting point of connection between the humanities and computer science, both pedagogically (as a route to the "computational thinking" that is increasingly being put forward as a key component of 21st-century general education) and as a critical and ethical complement to the models of interpreting processes found in most computer science.
The good news is that work of this sort is already becoming more established, with the MIT Press having recently founded both a book series for software studies and one for its sibling "platform studies" (which focuses on the material conditions that shape and inspire the authoring of computational processes). The promise of software studies is that the digital humanities can be central to one of the most pressing issues of our time: helping us both to understand and to live as informed, ethical people within a world increasingly defined and driven by software.
And we can also go further, helping to create this world. More than a quarter-century ago, Brenda Laurel's dissertation established how deep knowledge of subject matter developed within the humanities -- in Laurel's case, classical drama -- could be used to inform the design of new technologies. Laurel became a leading creator and theorist of digital media by adapting insights and models from a long history of humanities scholarship on the arts. Such work is, if anything, even more vital today -- and is the second area of digital humanities that I believe we should press forward. With the rise of computer games as a cultural and educational form (along with other emerging media technologies), computer scientists are increasingly being called upon, both in universities and industry, to develop computational processes that make new forms of media possible.
But computer science has no knowledge or methods appropriate for guiding or evaluating the primary, media-focused aspects of this work. Computer science's next level of dialogue with the digital arts community is certainly encouraging, but there is also an essential role for the humanities to play in both contributing to innovative media technology projects and helping set the agenda. Unfortunately, unlike software studies, this area of digital humanities work does not yet have a name and is often not even identified as humanities, despite its deep grounding in humanities knowledge and methods (the scholars involved generally also have identities as digital artists/designers or computer scientists).
But the importance of addressing this lack is becoming clear. In fact, I am happy to announce that an unprecedented group of partners (including the NSF, NEH, NEA, and Microsoft) have stepped forward to help convene a workshop on this topic that Michael Mateas, Chaim Gingold, and I will host at UC Santa Cruz later this year. Our planned outcomes range from developing a greater understanding of this area of digital humanities to matchmaking a set of projects that are explicitly at the intersection of computer science, digital arts, and digital humanities.
Now for the bad news. Unfortunately, as digital humanities is coming to public consciousness, the vision of the field being put forth in the most high-profile venues leaves out entirely such possibilities as these. In January, Stanley Fish wrote in The New York Times that digital humanities is concerned with "matters of statistical frequency and pattern," and summarized digital humanities methodology as "first you run the numbers, and then you see if they prompt an interpretive hypothesis." Earlier in January, at the Modern Language Association mega-conference, a workshop on Getting Started in Digital Humanities suggested that the field's promise lies in the fact that "Scholars can now computationally analyze entire corpora of texts or preserve and share materials through digital archives."
How will digital humanities ever come to be something more diverse and relevant if both detractors and supporters seem to agree that its sole focus is storing and analyzing data? I believe digital humanists must begin by recognizing and developing important areas of work, already part of the field's history, that such conceptions marginalize. And those in the field must see these areas as important places for digital humanities to grow, even if they lie beyond the narrow confines of the wall digital humanists are inadvertently helping build around themselves.
Noah Wardrip-Fruin is associate professor of computer science and co-director of the Expressive Intelligence Studio at the University of California, Santa Cruz. His most recent book, Expressive Processing, has just been published in paperback.