Information, Please

People who met Aldous Huxley would sometimes notice that, on any given day, the turns of his conversation would follow a brilliant, unpredictable, yet by no means random course. The novelist might start out by mentioning something about Plato. Then the discussion would drift to other matters -- to Poe, the papacy, and the history of Persia, followed by musings on photosynthesis. And then, perhaps, back to Plato.

So Huxley's friends would think: "Well, it's pretty obvious which volume of the Encyclopedia Britannica he was reading this morning."

Now, it's a fair guess that whoever recounted that story (to the author of whichever biography I read it in) meant to tell it at Huxley's expense. It's not just that it makes him look like an intellectual magpie, collecting shiny facts and stray threads of history. Nor even that his erudition turns out to be pre-sorted and alphabetical. 

Rather, I suspect the image of an adult habitually meandering through the pages of an encyclopedia carries a degree of stigma. There is a hint of regression about it -- if not all the way back to childhood, at least to preadolescent nerdishness. 

If anything, the taboo would be even sterner for a fully licensed and bonded academic professional.
Encyclopedia entries are among the lowest forms of secondary literature. Very rare exceptions can be made for cases such as Sigmund Freud's entry on "Psychoanalysis" in the 13th edition of the Britannica, or Kenneth Burke's account of his own theory of dramatism in The International Encyclopedia of the Social Sciences. You get a certain amount of credit for writing for reference books -- and more for editing them. And heaven knows that the academic presses love to turn them out. See, for example, The Encyclopedia of Religion in the South (Mercer University Press), The Encyclopedia of New Jersey (Rutgers University Press) and The International Encyclopedia of Dance (Oxford University Press), not to mention The Encyclopedia of Postmodernism (Routledge).

It might be okay to "look something up" in an encyclopedia or some other reference volume. But read them? For pleasure? The implication that you spend much time doing so would be close to an insult -- a kind of academic lese majesty.

At one level, the disdain is justified. Many such works are sloppily written, superficial, and/or hopelessly unreliable. The editors of some of them display all the conscientiousness regarding plagiarism one would expect of a failing sophomore. (They grasp the concept, but do not dwell on it long enough for it to become an inconvenience.)

But my hunch is that social pressure plays a larger role in it. Real scholars read monographs! The nature of an encyclopedia is that it is, at least in principle, a work of popularization. Probably less so for The Encyclopedia of Algebraic Topology, assuming there is one. But still, there is an aura of anti-specialization and plebeian accessibility that seems implicit in the very idea. And there is something almost Jacobin about organizing things in alphabetical order.

Well then, it's time. Let me confess it: I love reading encyclopedias and the like, at least in certain moods. My collection is not huge, but it gets a fair bit of use. 

Aside from still-useful if not cutting-edge works such as the four-volume Encyclopedia of Philosophy (Macmillan, 1967) and Eric Partridge's indispensable Origins: A Short Etymological Dictionary of Modern English (Macmillan, 1958), I keep at hand any number of volumes from Routledge and Blackwell offering potted summaries of 20th century thinkers. (Probably by this time next year, we'll have the 21st century versions.)

Not long ago, for a ridiculously small price, I got the four paperbound volumes of the original edition of the Scribner's Dictionary of the History of Ideas, first published in 1973 -- the table of contents of which is at times so bizarre as to seem like a practical joke. There is no entry on aesthetics, but one called "Music as Demonic Art" and another called "Music as a Divine Art." An entry called "Freedom of Speech in Antiquity" probably ought to be followed with something that brings things up to more recent times -- but no such luck.

The whole thing is now available online, with its goofy mixture of the monographic ("Newton's Opticks and Eighteenth Century Imagination") and the clueless (no entries on Aristotle or Kant, empiricism or rationalism). But somehow the weirdness is more enjoyable between covers.

And then, of course, there is the mother of them all: the Encyclopedia, or Rational Dictionary of the Sciences, Arts, and Crafts that Denis Diderot and friends published in the 1750s and '60s. Aside from a couple of volumes of selections, I've grabbed every book by or about Diderot in English that I've ever come across.

Diderot himself, appropriately enough, wrote the entry for "Encyclopedia" for the Encyclopedia.

The aim of such a work, he explained, is "to collect all the knowledge scattered over the face of the earth, to present its general structure to the men with whom we live, and to transmit this to those who will come after us, so that the work of past centuries may be useful to the following centuries, that our children, by becoming more educated, may at the same time become more virtuous and happier, and that we may not die without having deserved well of the human race."

Yeah! Now that's something to shoot for. It even makes reading encyclopedias seem less like a secret vice than a profound obligation.

And if, perchance, any of you share the habit -- and have favorite reference books that you keep at hand for diversion, edification, or moral uplift -- please pass the titles along below....

Scott McLemee

The Power of 3

Who would have thought that the number three could wield such power?

It’s such a simple, unimposing numerical value; almost cute in appearance. Yet as it turns out, this small digit is value-packed. "Three" is apparently a well-known assessment measure, a highly accurate indicator of academic prowess. Or so I’m told. This number is the key at my institution, at least in most departments, to determining which young faculty should be kept for the long haul or discarded back into the pile. If the latter occurs, it is a gross understatement to simply say that a pink slip has been handed out. There is finality in that decision, because so many who have been sent packing at one institution cannot endure the thought of being rejected again. Oftentimes, a career in academia is brought to an end.

How did three get selected to be so important, so insightful, so utterly determinative? Why was this number picked instead of, say, 4, or even 10? I do not have those answers. What I do know, or have been told, is that standards must be set, a proverbial bar must be attained. And that at any respectable place of employment, at least one that hopes to be recognized by U.S. News & World Report, the standard cannot be lower than three.

I am referring, if you hadn’t guessed, to the number of publications needed, under most circumstances, to be awarded tenure at my institution. Why am I so cynical about this? No doubt some of you may say that three publications is a reasonable expectation over a six-year time frame. Let me be clear: The actual value of three publications is not that troublesome to me.

What frustrates me is that the standard for research and scholarship at my institution and in my department has (I should say had) always been vague. In fact, during my ‘probationary faculty’ daze, I would meet annually with my department chair and dean to ask what the target was for research publications. I wanted a number. If nothing else I was looking for reassurance and peace of mind. I would usually hear "You are on the mark" or "Don’t worry, your research is good." But I was never offered a numerical value.

Asking other members of the department did not help. No one offered a firm target that I could set my sights on. After asking many times, I was finally told that the department did not have an exact number that the tenured members were looking for. Rather, I would need at least one publication for tenure, but two or three would be better. My interpretation of that statement was that three is safe, but more is probably in the comfort zone.

So, before the time came to apply for tenure, I worked to far surpass the firmly vague target presented to me. Mission accomplished. I surpassed the target, and thankfully, the tenure process went very smoothly. However, I struggled with the reality that no one would commit to a narrowly focused standard for research during my probationary window. Mentally, I wanted to know the exact goal. But the exact height of the illustrious bar that I needed to clear was kept secret.

As I understand it, the vagueness allowed some wiggle room. For example, in an instance when someone was a dynamic teacher and provided exceptional service to our students, department and/or college, it was potentially permissible to be only so-so as a scholar and still make tenure. A hard and fast research standard would conceivably injure these individuals. And after all, I needed to remember that our primary focus as a college and department was on undergraduate education. Teaching is of prime importance at schools with such a mission, and service rivals research. Although vagueness is an approach that is difficult for a biologist like me to grapple with, I came to appreciate the intent.

This admittedly awkward research standard was the norm at my institution for quite a long period of time -- until recently. But research expectations have changed college-wide. Determining exactly when they changed has been difficult, but the climate here is most certainly at a different place than it was a few years back. Vagueness was replaced with exactness. A numerical value was established as the minimum standard for being on track for tenure: three is that value, and the value shall be three. This is the target that the administration uses as the measure for evaluating a tenure application, at least for those in the sciences. Notice that I have not mentioned teaching or service. The reality is that those responsibilities have been moved to a somewhat different level of significance in the evaluation process.

I sought such a standard when I was an untenured faculty member, so shouldn’t I be ecstatic now, or at least satisfied? But I am not. The number that I have mentioned arrived rather suddenly on campus and in stealth-like fashion. And it has taken a toll.

About seven years ago, my department hired a promising young biologist who was enthusiastic about teaching and already established as a talented researcher. She was hired during our “vagueness” period. She was told the same thing that I was years earlier: for research publications, one is a must, more is better.

But while this young faculty member was traveling down this path, a detour suddenly appeared: The exactness period was ushered in. The short version of a rather nasty story is that my colleague was seemingly held to the new standards. It goes without saying that, if true, this was not a fair or appropriate tenure practice. No doubt you have predicted the outcome of her tenure decision: She was denied. Notice that I have not mentioned whether I thought she deserved tenure, because that is not the point. Her shortcoming was that she did not meet “the power of 3.”

My message is not a mere plaint on behalf of a fallen colleague. It is larger than a single individual. The current trend in higher education is to focus on scholarship -- more precisely, on scholarly output -- most notably at schools that have traditionally served as predominantly undergraduate institutions. This in and of itself is not a poor strategy. I personally believe that teaching and research are intimately woven, and that an excellent teacher is most likely an outstanding scholar. So I embrace the idea of elevating research at my institution when the pursuit of knowledge and discovery will generate excitement and passion within the classroom.

But an appropriate balance between teaching and research must be established. The reality is that any faculty member at an undergraduate college or university must juggle heavy teaching responsibilities with mentoring, advising and college service, all while remaining a productive scholar. Unfortunately, as the demands of the latter increase, they can be met only at the expense of the other responsibilities, including time dedicated to family. This means that teaching, mentoring and advising lose importance out of necessity.

Clearly, priorities are being confused. In the current marketplace, many institutions are competing for high-quality students. The competition is made more intense by the soaring costs of attending college, particularly at private institutions. Regardless, schools that were outstanding because of their original mission (educating undergraduates) are trying to redefine themselves. The cost is becoming enormous in terms of the potentially diminished education provided to the students and in terms of the faculty that get discarded along the way.

This fall, I started the term without my friend and colleague. She has moved on. Thankfully for the students, she has stayed in academia. I understand that change is inevitable and that progress can only come through change. So maybe our new standards will in fact elevate my college to a higher level, allowing our students to achieve academic accomplishments they have never reached before. Maybe.

Or just maybe the members of the college will like the taste of elevated scholarship and want to drink more rather than integrating it with our undergraduate mission. I am afraid the latter will take hold. The faculty governance on my campus already is debating whether the tenure standard should be raised even higher, and a member of the Board of Trustees has stated that the value of an undergraduate education goes up each time a publication appears with our institution’s name.

A friend of mine has argued that trends in higher education move like a pendulum, so this current craze will come back to equilibrium at some point. I’m not so confident. It appears that our pendulum lacks counterbalance, at least at the moment, and may just as easily rise so high that it crashes back down on top of us.

David B. Rivers

David B. Rivers is an associate professor of biology at Loyola College in Maryland.

Return to Earth College

I’m not much one for reunions at my alma mater. But I did have a 25th reunion last month at one of my journalistic alma maters, so to speak, College of the Atlantic, the small, environmentally oriented, alternative liberal arts college located off the coast of Maine. It was one of the colleges I covered during my first tour of duty as a freelance education writer during the late 1970s and early 1980s.

Like most of the stories I did during my early, gallivanting days, the one I did about COA began with a hunch. The little information I had about this remote, decade-old, solar-powered cousin of Bennington, Goddard, et al., was that COA offered a bachelor of arts degree in something called human ecology, and that staff and students spent a lot of time observing and tracking whales. I was intrigued. 

And so, armed with an assignment, off I flew to Bar Harbor, Maine, for what turned out to be one of my most memorable assignments covering academe. I was immediately taken with the college’s Noah-like president, Ed Kaelber, and his vice president, Sam Eliot, whose environmentalist passion was leavened by a self-deprecatory sense of humor.

What moved COA’s founders to establish their college-cum-environmentalist colony back in ’69? I asked Eliot one blustery evening, as we huddled over coffee in his office in the college’s Ark-like wooden administration building. "Basically, we came out here to save the world," Eliot said. “Now,” he said with a grin, “we’re concentrating on Maine.”

And saving Maine the earnest eco-missionaries of COA were, via such inspired stratagems as a dead minke whale that had washed up near the college and had been converted into a mobile mammalian biology diorama for the benefit of the local populace. Whale on Wheels, it was called. COA students were largely responsible for preserving Maine’s Great Heath, an ecologically unique bog. The college’s Harbor Seal Project had helped rescue many abandoned or stranded seals. And the Department of the Interior thought highly enough of the biologist Steve Katona’s course, Whales of the North Atlantic, to award his class a contract for the Mount Desert Island Whale Watch. With 180 students and 15 faculty members, classes at the spare, island-based campus were small, education an intense, hands-on affair. I never saw a faculty as inspired and committed as COA’s.

For the most part, classes at COA were as intellectually rigorous as anywhere, if not more so. Some people might have difficulty defining exactly what human ecology meant -- "it's … a seagull," said one misty-eyed student -- and yet COA students were making real connections between man and nature. Here, in December 1980, as the new materialistic morning of Ronald Reagan was dawning, was a college really dedicated to changing and, yes, saving the world.

To a sixties survivor that was bracing to behold. "If the deterioration of the environment keeps going the way it is now," in the prescient words of Glen Berkowitz, one of the many dynamic, clear-eyed students I met during my fascinating sojourn in Bar Harbor, "people will have to use COA graduates." He was right. (In fact, Berkowitz, who graduated in 1982, went on to become a senior consultant with Boston’s massive Big Dig project, where he advised the builders on the human impact of the dig, and is now involved with a wind power project for the city’s harbor.) He's but one of the many COA graduates who have used their unique education to do social and environmental good. Others include Chellie Pingree, head of Common Cause, and Bill McLellan, a University of North Carolina research scientist whom National Public Radio recently described as the federal government’s “go-to guy on marine mammal research.”

I had planned on a visit of several days. Instead I wound up staying for several weeks. My subsequent dispatch about “Earth College,” as I good-naturedly dubbed the place, reflected my affection for the spunky laboratory school. "To be sure, the college needs a gymnasium and a student center," I reported. "But the College of the Atlantic is alive and well. That in itself is something to celebrate."

Privately, I wasn’t so optimistic. The future for alternative or experimental colleges, I well knew, was increasingly grim, having recently reported the demise of one of COA’s experimental siblings, Eisenhower College, whose lofty-minded World Studies program and holistic educational philosophy were not unlike COA’s.

Hence my delight and surprise, upon recently visiting the college on the Web, to encounter an institution that, at least on the evidence of its kaleidoscopic site, was thriving.  But Web sites can be deceiving. It was time to check out College of the Atlantic again.  

And so, last month, just as I had a quarter of a century before, I set off for the college’s rustic, coastal Maine campus, next to Acadia National Park. Once again I found myself auditing classes, hanging out with COA students and faculty in the main dining room, listening to the swooning sea gulls, just as I did long ago.

My green reunion. Best reunion I ever had.

To be sure, I learned from some of the veteran COA faculty I met up with again, COA did wind up having its own Sturm und Drang period in the early ’80s, including a civil war pitting faculty and staff who wished to keep the college as a college against another faction that wanted COA to become more of a think tank. The former won. However, enrollment at the beleaguered campus dropped to a mere hundred. "We almost lost the college," one teacher said.

Nevertheless, under the leadership of Steve Katona, the college’s savvy whale-watcher-turned-president, who has been at the college’s helm since 1992, COA has survived. Now, with an enrollment of 270 students -- over 20 percent of them from abroad -- and 26 faculty, COA is, indeed, thriving. Shedding the "experimental" label that once put off parents of prospective students, the pioneering institution is competitive with some of the best mainstream liberal arts colleges in the country, while the human ecology concept and educational philosophy that COA pioneered have gained respect.

On the surface, COA is no longer as "crazy" as it once was. The college has an eye-catching logo now, and an expensive viewbook. The food is no longer strictly vegetarian. COA’s ponytail is gone.

And yet, I could see, in the small, intensely participatory classes and laboratories I audited, and the interactions I had with students and faculty, that the college’s essence and mission are unchanged. Here, still, on this remote island, off the coast of Maine, is a community unabashedly committed to saving the world.

One professor, Davis Taylor, is an economist and former Army captain who attended West Point. He said that while at first blush one could hardly think of two institutions more different than West Point and COA, he saw similarities between the two. "Both have a sense of mission," Taylor said, and “both emphasize systems thinking.”

As one student after another, including ones from as far away as Serbia and Seattle, told me, “I came here to make a difference.”

I could see, during the rainy but otherwise mind-and-spirit-expanding week I spent in Bar Harbor, that College of the Atlantic is still, in the best sense, alive and crazy after all these years. It was clear in a horizon-busting class in environmental history, and in an impromptu world music session in the college greenhouse. And for one of the college's early champions, one who believes that the greatness of the American higher education system lies in its multiplicity, that was reassuring to see.

I could also see that original spirit in a hands-on, feet-in conference on riverine planning that I (literally) waded into, where COA faculty, staff and local planners joined forces to show journalists how it is possible to shape a community planning system at an environmental and inter-county level.

So there I was one stormy afternoon hanging out with Bill Carpenter, the novelist and poet who has taught at COA since its founding 36 years ago, sifting the college's saga over strong coffee in his cozy, book-lined office. We had returned from an exciting, syncopated session of “Turn of the Century,” an interdisciplinary class in cultural history that Carpenter teaches along with the artist JoAnne Carpenter and the biologist John Anderson, in which the three professors enthusiastically riff off each other, in between questions from the packed, palpably delighted class of 25 (which for COA is huge).

“So, what was your original vision?”  I asked Carpenter, as we reminisced about the college’s wild and woolly early days.

“This was our vision,” he said, with finality.   

Here’s to survivors.

Gordon F. Sander

Gordon F. Sander, an Ithaca-based journalist and historian, has written about higher education for The Times Higher Education Supplement, The Chronicle of Higher Education, The New York Times and many other publications. He was recently artist-in-residence at Cornell University's Risley College for the Creative and Performing Arts. His most recent book is The Frank Family That Survived: A 20th Century Odyssey (Random House UK).

We're All Getting Better

Universities celebrate their achievements in an endless series of public pronouncements. Like the imaginary residents of Lake Wobegon, all universities are above average, all are growing, and all improve. In most cases, these claims of progress rest on a technically accurate foundation:   Applications did increase, the average SAT scores did rise, the amount of financial aid did climb, private gifts did spike upward, and faculty research funding did grow.

No sensible friend of the institution wants to spoil the party by putting these data points of achievement into any kind of comparative context. There is little glory in a reality check.

Still, the overblown claims of achievement often leave audiences wondering how all these universities can be succeeding so well and at the same time appear before their donors and legislators, not to mention their students, in a permanent state of need. This leads to skepticism  and doubt, neither of which is good for the credibility of university people.  It also encourages trustees and others to have unrealistic expectations about the actual growth processes of their institutions.

For example, while applications at a given institution may be up, and everyone cheers, the total pool of applicants for all colleges and universities may be up also. If a college's applications for the years 1998 to 2002 are up by 10 percent, it may nonetheless have lost ground, since the number of undergraduate students attending college nationally grew by 15 percent in the same period. Growth is surely better than decline, but only growth relative to the marketplace for students signals real achievement.
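The arithmetic behind that reality check is simple enough to put in a few lines of code. Here is a minimal sketch in Python, using the hypothetical figures above; the function name is my own invention, not anyone's official metric.

```python
# Relative-to-market growth: a gain in absolute numbers can still be
# a loss of market share.
def share_change(own_growth, market_growth):
    """Fractional change in market share, given own and market growth rates."""
    return (1 + own_growth) / (1 + market_growth) - 1

# The hypothetical college above: applications up 10%, national pool up 15%.
print(f"Change in market share: {share_change(0.10, 0.15):+.1%}")
# -> Change in market share: -4.3%
```

A 10 percent gain against a 15 percent market works out to more than a 4 percent loss of share, cheering in the admissions office notwithstanding.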

Similar issues affect such markers as test scores. If SAT scores for the freshman class rise by eight points, the admissions office should be pleased; but if, nationally, test scores went up by nine points among all students (as they did between 1998 and 2004), the college may have lost ground relative to the marketplace.

An actual example with real data may help. Federal research expenditures provide a key indicator of competitive research performance. Universities usually report increases in this number with pride, and well they should because the competition is fierce. A quick look at the comparative numbers can give us a reality check on whether an increase actually represents an improvement relative to the marketplace. 

Research funding from federal sources is a marketplace of opportunity defined by the amount appropriated to the various federal agencies and the amount they made available to colleges and universities. The top academic institutions control about 90 percent of this pool and compete intensely among themselves for a share. This is the context for understanding the significance of an increase in federal research expenditures.

A review of the research performance of the top 150 institutions reporting federal research expenditures clarifies the meaning of the growth we all celebrate (TheCenter, 2004). The total pool of dollars captured by these top competitors grew by about 14 percent from 2001 to 2002. While almost all institutions saw an increase in their research performance over this short time, a little over half (88 institutions) met or exceeded the growth of the pool. Almost all the others also increased their research expenditures, but even so, they lost market share to their colleagues in the top 150.

If we take a longer-range perspective, using the data between 1998 and 2002, the pool of funds spent from federal sources by our 150 institutions grew by 45 percent. For a university to keep pace, it would need to grow by 45 percent as well over the same period. Again, about half of our 150 institutions (80) managed to improve by at least this growth rate.  Almost all the remaining institutions also improved over this longer period, but not by enough to stay even with the growth of opportunity.

Even comparative data expressed in percentages can lead us into confused thinking. We can imagine that equal percentage growth makes us equally competitive with other universities growing at the same rate. This is a charming conceit, but it misrepresents the difficulty of the competition.

At the top of the competition, Johns Hopkins University would need to capture a sufficient increase in federal grants to generate additional spending of over $123 million a year just to stay even with the average total increase from 2001 to 2002 (it did better than that, with 16 percent growth). The No. 150 research university in 2001, the University of Central Florida, would need just over $3 million to meet the 14 percent increase in the total pool.  However, UCF did much better than that, growing by a significant 36 percent. 

Does this mean UCF is outperforming Hopkins? Of course not. JHU added $142 million to its expenditures while UCF added $7.6 million.  
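A back-of-envelope calculation makes the contrast concrete. The 2001 base figures below are not drawn from NSF tables; they are inferred from this column's own numbers, on the assumption that the $123 million and roughly $3 million needed to keep pace each represent 14 percent of the respective 2001 bases.

```python
# Back-of-envelope comparison of JHU and UCF, inferred from the figures above.
POOL_GROWTH = 0.14  # the total federal pool grew ~14% from 2001 to 2002

# 2001 bases inferred by assuming the "keep pace" dollars are 14% of base.
jhu_base = 123e6 / POOL_GROWTH  # roughly $880 million
ucf_base = 3e6 / POOL_GROWTH    # roughly $21 million

for name, base, added in [("JHU", jhu_base, 142e6), ("UCF", ucf_base, 7.6e6)]:
    print(f"{name}: grew {added / base:.1%} "
          f"(needed ${base * POOL_GROWTH / 1e6:.0f}M to keep pace; "
          f"added ${added / 1e6:.1f}M)")
# JHU: grew 16.2% (needed $123M to keep pace; added $142.0M)
# UCF: grew 35.5% (needed $3M to keep pace; added $7.6M)
```

Both outpaced the pool, but in utterly different dollar terms -- which is precisely why a percentage alone tells us so little.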

The lesson here, as my colleague at the system office of the State University of New York, Betty Capaldi, reminded me when she suggested this topic, is that we cannot understand the significance of a growth number without placing it within an appropriate comparative context or understanding the relative significance of the growth reported.  

It may be too much to ask universities to clarify the public relations spin that informs their communications with the public, but people who manage on spin usually make the wrong choices.

John V. Lombardi

The Great Mismatch

Doctoral education in the United States has changed rapidly over the last 30 years, with increasing specialization and the emergence of new sub-fields for graduate study. Depending on the nature and size of a university, some of these fields and sub-disciplines fit into traditional academic departments, while others demand their own departments or even colleges. Examples of the former abound, such as post-colonial studies, which may often find a comfortable home in an English or comparative literature department. In the latter category are fields like criminal justice, public policy, social work and nano-scale science and engineering -- highly developed fields that attract increasingly large numbers of students and significant government and foundation funding.

We live within an academic marketplace of ideas, and the best institutions respond to the emergence of new areas of inquiry with vigor. Indeed, research universities can be judged by their ability to recognize and institutionalize new areas and disciplines, supporting excellence within them and nurturing their growth. Scholars typically lead administrators in these efforts, writing books that outline possible boundaries of a new field, establishing journals to define the area, or gathering colleagues for forward-looking conferences intended to advance cutting-edge approaches and methods.

From our standpoint, the more fields and defined areas of doctoral study, the better: Formal establishment of these areas is typically the result of tremendously high student and scholarly demand. New areas of study are also the result of significant investments by universities and, in the case of public institutions, taxpayer dollars. Not surprisingly, state officials and the public are eager for an accounting of how their investments stack up against others. This issue becomes all the more important when, as is true with the National Research Council ratings project, participating public as well as private institutions must pay to be part of the study.

Given that pushing the research envelope is one of the central tenets of any great university, it seems ironic that a survey designed to evaluate the quality and breadth of research would leave so much of our nation’s research untouched. Despite the imagination, interdisciplinarity, and fluidity one finds across the academy in recognizing emerging fields, our most prominent rating system -- the NRC Assessment of Research-Doctorate Programs -- has not responded, and in fact has resisted the change we see around us.

This fall the NRC released its new taxonomy, listing the fields that would be assessed and those that would not be studied. The new taxonomy reflects our worst fears for the assessment of Ph.D. programs: It fails to recognize a large number of thriving and vitally important fields where some of the most talented researchers in the world can be found. Among these fields are criminal justice, public administration and policy, social work, information science, gender studies, education, and public health. We have expressed our strong objections about these exclusions to Ralph J. Cicerone, president of the National Academy of Sciences, and to Charlotte Kuh, study director for the NRC Assessment. We have received no response, and other academic leaders have been treated with the same disregard when they have challenged plans for the new assessment. Such behavior seems especially problematic given the importance of the NRC study for institutions and researchers.

Placing fields like gender studies and information studies in the new, nebulous "emerging fields" category -- fields that will not be rated -- does not solve the problem in the least, but simply steers important scholarly endeavors into a giant black box. The justification of the taxonomy boldly notes that "emerging areas of study may be transitory," hence it is risky to evaluate them with the same rigor used for other fields. From what we can discern, at least, information science and the study of race, ethnicity, sexuality, and gender have already emerged, and have profoundly changed the academy for the better. We imagine that scholars in these fields are not transitory in the least: A large number of them hold endowed chairs, run centers, manage departments, edit journals, lead foundations and run major institutions.

To make matters even more painful for us, for our faculty, and for colleagues around the nation, the National Academies recently asked for our financial support for the project -- a contribution of $20,000 for larger research universities like our own. We felt compelled to pay the price, but we did so reluctantly and over the strong objections of leading scholars on our campus.

One can critique numerous aspects of the NRC rating system, and a variety of leaders in higher education have done so quite eloquently for more than a decade since the last report. The data collection takes years to compile, and these data quickly become outdated as faculty members move and institutions change. We understand that the new system will involve online questionnaires and include a database that can be updated annually, and we appreciate the National Academies’ efforts in this regard. Another difficulty with earlier NRC studies has been the inclusion of reputational surveys. The forthcoming NRC study has promised to eliminate the reputational rankings from its rating system, and this, too, is an improvement. Among the worst offenses of the system has been the bias toward large programs; much of the variance in previous ratings can be explained elegantly by department size (the 600-pound gorilla of a department, even with many unproductive scholars, will come out ahead of smaller, higher-quality programs). Perhaps the questionnaires planned for institutions and admitted-to-candidacy doctoral students in selected fields will help add a new dimension to program quality that will compensate in some programs for differences in size.

But these revisions, while potentially significant, only make the intentional and unexplained omission of major fields of knowledge, critical to the development of the academy, more inexplicable. Methodological change is not much of an advance if one is not measuring the right population of fields and disciplines. A social science parallel, from public opinion research, is relevant here:  You can refine a survey instrument all you like, sweating over question wording, order effects, and non-response to the survey. But if you are asking respondents about banal issues of little political import, why bother?

There is an even more troubling irony in the current effort. The NRC has chosen not to include “those fields for which much research is directed toward the improvement of practice,” such as Ph.D. programs in “social work, public policy, nursing, public health, business, architecture, criminology, kinesiology, and education.” This approach, of course, flies in the face of a recently released report on “The Responsive Ph.D.” by the Woodrow Wilson National Fellowship Foundation. This report identifies the principle of “a cosmopolitan doctorate” as central to the future of the Ph.D. The report emphasizes that such a doctorate “will benefit enormously by a continuing interchange with the worlds beyond academia” and calls upon doctoral education to “open to the world and engage social challenges more generously.” The NRC assessment, by excluding so many well-established Ph.D. programs, will simply have the effect of reifying the status quo at research universities, instead of helping us respond boldly to the loud and chronic public call for an open and responsive academy.

The Taxonomy Committee argues that the task of evaluating research in these fields lies “beyond the capacity of the current or proposed methodology.” We do not accept this argument as valid, particularly given the proposed scope and expense of the projected NRC study. Further, the taxonomy displays no systematic logic with regard to which applied and interdisciplinary programs are included and which are excluded. Why include nutrition or pharmacology, clearly applied fields, but not criminal justice? Why is the study of sexuality not included while German linguistics and Latin American Literature both are? There is no decision rule in sight, and the taxonomy does not even come close to matching the current landscape of the academy. Perhaps if the NRC had retained the reputational measures, they might have been able to mount an argument about excluding particular fields. But, ironically, the new approach makes the taxonomy more distant from reality. It is removed from the marketplace of ideas, and excludes the voice of the scholarly community.

Apparently the NRC is not open to arguments like the ones above, and as a result, the ratings they will eventually produce will not reflect a great deal of the most important scholarship in higher education today. Not only will the final report have gaping holes, ignoring the work of thousands of scholars, but the NRC will also fail to recognize that interdisciplinary research with practical application matters immensely.

We predict that this next round of results will be received -- whenever it is complete -- as a dinosaur, an artifact of uneven logic and old-fashioned thinking about what constitutes true scholarly discovery. We are grateful that other assessment systems are appearing and regret that the NRC will spend over $5 million on a quickly outdated effort to assess graduate education. Thankfully, such short-sightedness will not stop our best scholars from developing new approaches, forging innovative fields, training hungry students, and changing the world for the better through their work. We call on the National Academies -- yet again -- to reconsider their taxonomy, so that leaders in higher education can demonstrate to our public officials that we are capable of evaluating the very research enterprise with which we have been entrusted.

Kermit L. Hall and Susan Herbst

Kermit L. Hall is the president and Susan Herbst is the provost of the State University of New York at Albany.

Minding the Student Client

Too seldom do we ask graduate students in science or engineering about their experiences in completing doctoral degree requirements. We go to administrators, faculty, and sponsors, but we don't ask students -- the main educational client -- what they make of what is happening to them. In particular, we are remiss with minority graduate students.

The need to communicate is self-evident. In 2004, fewer than 500 African American citizens and permanent residents earned Ph.D.'s in science and engineering fields, not even 1 percent of the total awarded. The numbers in some disciplines are so tiny as to defy sensibility: 17 in computer and information science, 13 in physics, 10 in mathematics, zero in astronomy. Today the science and engineering workforce -- like medicine, law, and business -- barely resembles the rest of America. The pattern for African Americans, observed for over half a century, is particularly bleak.

Last summer, I asked 40 minority doctoral candidates about their experiences in a "talk back" session at the annual meeting of the Graduate Scholars Program of the David and Lucile Packard Foundation. Since 1992, Packard Scholars have been selected from among the premier graduates of historically black colleges and universities.

The discussion confirmed that -- for these scholars at least -- those who do enter graduate programs in the sciences often face pressures not experienced by their non-minority colleagues. "It's not fun being a trailblazer in 2005," said one scholar, "because there are certain things we should not have to deal with. When you already have the responsibility and expectation of class work, nobody wants to carry the burden of the entire race and deal with issues that should have been resolved a long time ago."

Often, minority doctoral students in the sciences become PR spokespeople: "We are called upon to do a lot on diversity for the university. To sit on panels every time a black student is invited to the school ... to attend conferences, to take pictures for publications that show the diversity of the university. While we are doing these things, our counterparts are in the lab doing research and producing publications.... When a first-year student comes in, I want them to see another black face. But how do I maintain that research direction and focus? I have an extra burden not carried by my majority colleagues."

And while many students are supportive of diversity efforts, they cannot help but feel conflicted about the competitive realities facing science grads. "Yeah, I wanted to be a trailblazer," summarized one student, "but I also want the Nobel Prize in physics. I don't want to trail blaze in race relations at the university. I want to focus on my research and come up with a new laser treatment for cancer, that's my focus. I don't want to have to deal with the other stuff. Let me be me, let me shine, get your foot off of my neck, let me do my work."

The experiences voiced by the Packard Scholars are not unique. The AAAS Center for Advancing Science & Engineering Capacity was created to assist universities and colleges committed to improving the success of all students and faculty, especially those of color. The Packard Scholars reinforced much of what we've learned from our site visits, focus groups, and data reviews (for the center's approach, see this article). Their insights are noted here, many in the scholars' own voices.

Outreach must penetrate the academic reward system. As a faculty activity, outreach ranks a distant third behind research/entrepreneurship and teaching. Neither the faculty effort nor the outcome will change without institutional policies that restructure rewards. As one scholar put it, "Diversity will not be an issue until you start diving into their pockets, their budgets, because they'll do anything to get and keep their grants. But, if a university ... has all the money they need and new buildings, but they have never graduated an African American person, it's too easy to say, 'Oh, we don't know what to do, or we don't have the resources.' That's bull, because if you want the resources, you can get them."

And another remarked, "The program I selected had four African American graduates in the last 10 years. Two more are there now and another came in with me....  That makes a huge difference. Establish a great relationship with one student; make one happy and others will hear.... That is the easiest way to recruit because if they went to a black school, there are other students in their department who are looking for a good graduate program."

Gender and racial bias is a reality. Get over it -- with or without mentoring. The Packard Scholars report discrimination is alive and well in university programs: It ranges from negative comments in the lab about ability or preparation to the faculty's assumption that the only two black students in the department are going to work together. Some universities have developed mentoring or other support programs to mitigate the effects, while others let the problems go unattended.

Many students recommended that universities conduct diversity sensitivity training for the faculty. "That stops a lot of the comments and issues in the labs and in the classroom."

Still others found mentoring programs to be effective interventions. "I'm in medical school now [as an M.D./Ph.D. student], and there are institutionalized mechanisms designed with the philosophy that if we bring you to the school, it looks bad if we can't bring you to completion. Some of these or similar mechanisms, like 'big sib, little sib' mentoring situations can be implemented early. If you start to intervene after the first warning signs, these are still very much preventable problems. I think we would see a much improved attrition rate if we didn't wait until the problem is full blown -- a classic ounce of prevention is worth a pound of cure."

In situations lacking a formal infrastructure for dealing with discrimination, students devise their own. "Coming from [a historically black college/university] where the learning environment was more constructive, I was overlooked here several times because I was the only black in the class. I came up with strategies to cope. My best friend and I would intentionally split up ... so that we weren't in the same group.... We were able to survive because he would bring the information back to me and vice versa."

The student must focus on completing doctoral requirements. This form of accountability is a "performance contract" between student and major professor (if not one's dissertation committee). It reveals to the student the delicate balance of his/her endeavor: "When I started graduate school, the faculty taught us to work together, yet how to be competitive.... If I asked my advisor how to do something, he would guide me, but say 'You are different people, and I'm going to approach you at your level, so I may not ask you to do something that I ask your cohort to do because you are at a different place. But the results should be the same, because you are all here to get the Ph.D.'"

All kinds of institutions can be "minority serving." If we examine the baccalaureate origins of African American Ph.D.'s and of Latino Ph.D.'s, historically black colleges and Hispanic-serving institutions, respectively, are the largest producers. But Massachusetts Institute of Technology, Stanford University, and the University of California at Berkeley, among others, have distinguished records as producers of minority bachelor's graduates who go on to earn a doctorate in science or engineering. In addition, relative newcomers such as the University of Maryland-Baltimore County and Louisiana State University are undergraduate models of student preparation for science-based Ph.D.'s. Some institutions, and often departments within institutions, clearly "get it." But decentralized authority at the graduate level ensures unevenness and lack of sharing of best practices.

New Ph.D.'s underestimate the skills they possess. The orientation of most graduate programs in the sciences is to a single sector or career pathway that represents immediate job opportunity, but little demand for versatility. Because the doctoral training process reproduces the past (i.e., the traditions that fit an earlier time), it also reflects the biases and careers of one's major professors. Consequently, the Ph.D. experience minimizes belief in, and understanding of, skills beyond science fundamentals. The Capacity Center works with institutions to develop the skills required by 21st century organizations, academic and nonacademic alike: teamwork, problem-solving, adaptation, communication, cultural competence.

This is about leadership -- the overarching need to grow leaders. For all the talk about the impact of mentors and role models, there will always be successful professional women and persons of color who will say, "It was tough for me and it's going to be tough for those who come behind me." These folks, irrespective of vintage or field, will not reach out. That's just the way they are -- making assumptions, suppressing memories of the help they received, and dealing with students their way. As one scholar noted, "Just think about how far the world has come in 10 years. Most of these cats [faculty] we're working for got their Ph.D. in the 1980s, 70s. The technology is moving way too fast and with the stuff that we know, we'll take their jobs. Some of them do everything they can to keep you from completing these programs, making it that much more difficult. The last thing they want to do is lose a job to you."

Change comes as new professionals ascend to positions that control resources and decisions. It may mean climbing the academic ladder or pursuing a nonacademic path. Both routes demonstrate that it's who you know plus what you know that matters -- not one or the other exclusively.  Who's in your network? Who talks to whom? The AAAS Capacity Center makes explicit these aspects of professional socialization and networking that can make a difference in a career.

The nation has invested in science and engineering since Sputnik -- a half century -- to advance its education, economic, workforce, and national security interests. When students are not recruited and nurtured to degree completion, we waste talent and material resources -- in defiance of student demographics and to the detriment of the nation's place in the world.  

Daryl E. Chubin

Daryl E. Chubin is director of the Center for Advancing Science & Engineering Capacity, at the American Association for the Advancement of Science.

Mark of Zotero

Zotero is a tool for storing, retrieving, organizing, and annotating digital documents. It has been available for not quite a year. I started using it about six weeks ago, and am still learning some of the fine points, but feel sufficient enthusiasm about Zotero to recommend it to anyone doing research online. If very much of your work involves material from JSTOR, for example -- or if you find it necessary to collect bibliographical references, or to locate Web-based publications that you expect to cite in your own work -- then Zotero is worth knowing how to use. (You can install it on your computer for free; more on that in due course.)

Now, my highest qualification for testing a digital tool is, perhaps, that I have no qualifications for testing a digital tool. That is not as paradoxical as it sounds. The limits of my technological competence are very quickly reached. My command of the laptop computer consists primarily of the ability to (1) turn it on and (2) type stuff. This condition entails certain disadvantages (the mockery of nieces and nephews, for example) but it makes for a pretty good guinea pig.

And in that respect, I can report that the folks at George Mason University’s Center for History and New Media have done an exemplary job in designing Zotero. A relatively clueless person can learn to use it without exhaustive effort.

Still, institutions that do not currently offer tutorials on Zotero might want to do so, for faculty and students who may lack whatever gene makes for an intuitive grasp of software. Academic librarians are probably the best people to offer instruction. Aside from being digitally savvy, they may be the people at a university in the best position to appreciate the range of uses to which Zotero can be put.

For the absolute newbie, however, let me explain what Zotero is -- or rather, what it allows you to do. I’ll also mention a couple of problems or limitations. Zotero is still under development and will doubtless become more powerful (that is, more useful) in later releases. But the version now available has numerous valuable features that far outweigh any glitches.

Suppose you go online to gather material on some aspect of a book you are writing. In the course of a few hours, you might find several promising titles in the library catalog, a few more with Amazon, a dozen useful papers via JSTOR, and three blog entries by scholars who are thinking aloud about some matter tangential to your project.

How do you keep track of all this material? In the case of the JSTOR articles, you might download them to your laptop to read later. With material available only on Web pages, you can do a "screen capture" (provided you've learned the command for that) but might well end up printing them out, since otherwise it is impossible to highlight or annotate the text. As for the bibliographical citations, you can open a word-processing document and copy the references, one by one, or use note-taking software to do the same thing a little more efficiently.

In any case, you will end up with a number of kinds of digital files. They will be dispersed around your laptop in various places, organized as best you can. Gathering them is one thing; keeping track of them is another. And if you have a number of lines of research running at the same time (some of them distinct, some of them overlapping) then the problem may be compounded. Unless you have an excellent memory, or a very efficient note-taking regimen, it is easy to get swamped.

What Zotero does, in short, is solve most of these problems from the start -- that is, at the very moment you find a piece of material online and decide that it is worth keeping. You can organize material by subject, in whatever format. And it allows cross-referencing between the documents in ways that improve your ability to remember and use what you have unearthed.

For example, you can "grab" all the bibliographical data on a given monograph from the library catalog with a click, and save it in the same folder as any reviews of the book you've downloaded from JSTOR. If the author has a Web site with his recent conference papers, you can download them to the same project file just as easily.

This isn’t just bookmarking the page. You actually have the full text available and can read it offline. The ability to store and retrieve whole Web pages is especially valuable when no reliable archive of a site exists. I got a better sense of this from a conversation with Manan Ahmed, a fellow member of the group blog Cliopatria, who has been using Zotero while working on his dissertation at the University of Chicago. Articles he read from Indian newspapers online were sometimes up for only a short time, so he needed more than the URL to find them again. (He also mentions that Zotero can handle his bibliographical references better than other note-taking systems; it can store citations in Urdu or Arabic just as well as English.)

Furthermore, Zotero allows you to annotate any of the documents you hunt and gather. You can cross-reference texts from different formats -- linking a catalog citation to JSTOR articles, Web publications, and so on. If a specific passage you are reading stands out as important, it is possible to mark it with the digital equivalent of a yellow highlighter. And you can also add marginal annotations, just as with a printout -- except without any limitation of space.

When the time comes to incorporate any of this material into a manuscript, Zotero allows you to export the citations, notes, and so forth into a word-processing document.

Zotero is what is called a “plug-in” for the Mozilla Firefox Web browser. You can use it only with Firefox; it doesn’t work with Netscape or Internet Explorer. People who know such things tell me that Firefox is preferable to any other browser. Be that as it may, the fact that Zotero functions only with Firefox means you need to have Firefox installed first. Fortunately it, too, is free. (All the necessary links will be given at the end of this column.)

While you are online, using Firefox to look at Web sites, there is a Zotero button in the lower right-hand corner of the browser. If something is worth adding to your files, you click the button to open the Zotero directory. This gives you the ability to download bibliographical information, Web pages, digital texts, etc., and to organize them into folders you create. (If a given document might be of use to you in two different projects, it is easy to file it in two separate folders with a couple of clicks.)

Likewise, you use the Zotero button in Firefox to get access to your material when offline. Then you can read things you glanced over quickly at the library, add notes, and so forth.

I won't try to explain the steps involved in using Zotero’s various features. Prose is hardly the best way to do so, and in any case the Zotero website offers “screencasts” (little digital movies, basically) showing how things work. The most striking thing about Zotero is how well the designers have combined simplicity, power, and efficiency -- none of them qualities to be taken for granted in a digital research tool. (Here I am thinking of a certain note-taking program that cost me $200, then required printing out the 300-page user’s manual explaining the 15 steps involved in doing every damned thing.)

There is some room for improvement, however. All of the material gathered with Zotero is stored on the hard drive of whatever computer you happen to be using at the time. If you work with both a laptop and a computer at home, you can end up with two different sets of files. And of course the document you really need at a given moment will always be on the other system, per Murphy's law.

The optimal situation would be something closer to an e-mail system. That is, users would be able to get access to their files from any computer that had Web access. Material would be stored online (that is, on a server somewhere) and be available to the user by logging in.

Aside from the increased convenience to the individual user, making Zotero a completely Web-based instrument would have other benefits. The most important -- the development likely to have a significant impact on scholarship itself -- would be its ability to enhance collaborative work. Using a Zotero account as a hub, a community of researchers could share references, create new databases, and so on. And the more specialized the field of research, I suppose, the more powerful the effect.

All of which is supposed to be possible with Zotero 2.0, which is on the way. The release date is unclear at this point, though improved features of the existing version are rolled out periodically.

But for now, the folders you create on your laptop are stored there -- and remain unavailable elsewhere, unless you make a point to transfer them to another computer. This brings up the other serious problem. There does not seem to be a ready way to back up your Zotero files en masse. In the best case, there would be a command allowing you to export all of the material in Zotero to, say, a zip drive. Otherwise you can end up with huge masses of data, representing however many hours of exploration and annotation, and no easy way to protect it.
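
In the meantime, the do-it-yourself workaround seems to be copying Zotero's data files by hand. Here is a minimal sketch, on the assumption (true of early versions, as I understand it) that Zotero keeps its database and attachments in a "zotero" folder inside your Firefox profile; the profile name below is a placeholder you would have to adjust for your own system:

```python
# Minimal backup sketch. Assumes Zotero stores its data in a "zotero"
# folder inside the Firefox profile; the profile name is a placeholder.
import shutil
from datetime import date
from pathlib import Path

profile = Path.home() / ".mozilla" / "firefox" / "abcd1234.default"  # placeholder
source = profile / "zotero"

# Produce e.g. ~/zotero-backup-2009-08-01.zip
archive = Path.home() / "zotero-backup-{}".format(date.today())
shutil.make_archive(str(archive), "zip", str(source))
print("Backed up {} to {}.zip".format(source, archive))
```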

Perhaps it is actually possible to do so and I just can’t figure it out. But then, neither can the full-fledged member of the digerati who initiated me into Zotero. And so we both use it with a mingled sense of appreciation (this sure makes research more efficient!) and dread (what if the system crashes?).

For now, though, appreciation is by far the stronger feeling. Zotero does for research what word-processing software did for writing. After a short while, you start to wonder how anyone ever did without it.

If you don't already have Firefox 2.0 on your computer's desktop, you will need to download it before installing Zotero itself. Both are available here. The site also offers a great deal of information for anyone getting started with Zotero. Especially helpful are the “screencast tutorials” -- the next best thing to having a live geek to ask for help.

A good initial discussion of Zotero following its release last fall appeared at the Digital History Hacks blog. Also worth a look is this article.

"While clearly Zotero has a direct audience for citation management and research," according to another commentary, "the same infrastructure and techniques used by the system could become a general semantic Web or data framework for any other structured application." I am going to hope that is good news and not the sort of thing that leads to cyborgs traveling backward in time to destroy us all.


Scott McLemee

The Inadequacy of Increased Disclosure

In recent years there has been a strong push to regulate science by increasing disclosure of financial conflicts of interest (FCOI). As well-intentioned as this regulatory approach might be, it is based on flawed assumptions, poses the risk of becoming a self-perpetuating end in itself, and is a distraction from the underlying serious problem.

It is hard to see how strengthened FCOI disclosures could have a significant effect, since the very reports of Sen. Charles Grassley’s Finance Committee show that there is next to no enforcement. If the dog is all but toothless now, will someone unscrupulous hesitate to game the system by simply not reporting? A clever operator will get a lawyer to guide such behavior. But this is hardly the extent of what we should be thinking about.

Conflict of interest rules are supposed to control corruption by recusing those with a financial stake. Corruption is the rational response in such systems, so the mythical “rational players” will be corrupt. In political culture, corruption is a given, and conflict of interest rules have had some effect in American legislatures. But science is not politics.

Scientific culture presumes honesty, but the data say that scientific fraud is large and growing. A recent study boggles the mind: some 9,000 papers were flagged for possible plagiarism -- and merely to get into the running required substantial matches in the abstracts -- while all of the first 212 examined in full proved to be probable plagiarism. It recently came to light that virtually an entire subfield in medicine was a fraud, though it is not clear whether it was harmful. I will not belabor this, but basic sense tells us that if the dumbest kind of fraud is so widespread, we doubtless have serious problems elsewhere.
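
For what it is worth, the flagging in such studies is mechanical, and the underlying idea is simple. A toy sketch -- an illustration of the general approach, not the actual method of the study just mentioned -- scores how many short word sequences two abstracts share:

```python
# Toy sketch of automated plagiarism flagging: score the overlap of
# 5-word sequences between two abstracts. Illustrates the general idea
# only -- not the actual method of the study cited above.

def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=5):
    """Fraction of n-word sequences in text a that also appear in text b."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga) if ga else 0.0

abstract_one = "we report a novel method for measuring gene expression in tumor cells"
abstract_two = "we report a novel method for measuring gene expression in cardiac cells"

# Independent texts score near 0; near-copies score near 1. Pairs above
# some threshold get flagged for full examination by a human.
if overlap(abstract_one, abstract_two) > 0.5:
    print("Flag for full examination")
```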

And in those rare cases when fraud is discovered, there is little punishment. Usually no charges are filed, and authors do not necessarily even withdraw their papers. When they do withdraw them, there is little facility for recording that fact, and the papers can remain available in NIH databases and elsewhere without so much as a warning flag.

A European pilot project has taken a stab at the problem, to mixed reviews; a Scifraud Web site has fared similarly. At worst, research privileges may be taken away. In one of the few cases I am aware of where charges were filed -- the South Korean case of Hwang Woo-Suk -- the defendant was given a prestigious award while standing trial for fraud, unable to attend the ceremony for that reason. In other words, in science, a life of crime is easy: at worst one gets a slap on the wrist. For those who commit frauds of various kinds, mostly one wins -- publications generate promotions, grants, etc.

Look at the situation objectively and one must ask: why bother doing real research if you can scout out what is probably true from some hard-working researchers with real data, then submit a paper that looks perfect, "proving the hypothesis" with "all the latest techniques"? As we have seen, simple plagiarism of the dumbest kind is probably endemic. There is software that can fake a gel or fake flow cytometry data, and one must assume that it is used. Call it “theft by perfection." We have no data at all on such fraud, but anecdotal evidence from semi-random samples of significant size certainly suggests it occurs in certain areas of bioscience. So if there is great upside in scientific fraud -- where’s the downside?

Perhaps one might be exposed, but even then, unless the case is really high profile, few people will know. Is the chance of being caught even as high as one in 5,000? Those thousands of fake papers say no; they suggest it may be less than one in 10,000.

Given all of this, the rational response would be to face the scientific fraud problem head on rather than enact window dressing regulations, and I have a few proposals for how to do that.

The first regulatory change we need is to throw out the statute of limitations, currently set at six years. Folks, under current National Institutes of Health rules the case of the midwife toad would not have been exposed! Isn't that ridiculous? Scientists are (or should be) some of the better record keepers on the planet. Yes, records aren’t perfect, nor are memories, but mostly we have them around somewhere, or at least enough of them. We should remember also that scientists have been selected for superior memories and analytic abilities. In the context of science, graduate students are usually the first to find out about fraud, because they see exactly what is going on. And the median time from entering graduate school to the awarding of a Ph.D. is six years -- precisely the period when students are most vulnerable to retaliation.

The second regulatory change concerns intra-university investigations. Institutional collusion whitewashes these investigations unless a professor or dean takes up the cause. Flatly, intra-university procedures don't work for graduate students and post-docs, and those who use them tend to find themselves pariahs. In this way our biosciences system has been systematically driving out some of the most ethical and capable researchers in training, who leave when subjected to retaliation. "Keep your head down and don't rock the boat" is the watchword in graduate school these days; I get the strong impression that those who went through grad school 30 years ago have little idea how bad it is. Ad hoc committees of arbitrarily chosen people -- who, I believe, are sometimes interfered with backstage by chancellors -- can exhibit phenomenally poor investigative skills when presented with claims. Those who serve on such committees are in a lose-lose position and have no incentive to be there. The only way they win is to curry favor with the administration.

Consequently, responsibility for academic misconduct complaints and whistle-blower reports must be removed from the institutions in which they occur. I propose that such investigations be turned over to an Office of Research Integrity relocated to the Justice Department for special adjudication. Researchers should be held personally liable and charged with criminal conduct, with efforts made to cut deals for fingering others in their loose circle. I strongly suspect that fraud appears in clusters linked by human networks in academia, as it does elsewhere. Scientific fraud should be a federal criminal matter whenever federal funds are used. Papers containing fraudulent data should be removed from federally funded databases and replaced with an abstract of the case and a link to the case file.

Non-citizen students and post-docs are even more vulnerable to manipulation and extortion than citizens, because they depend on their professor for a visa -- which gives the unscrupulous still more retaliatory power. I suspect the only cure is a 10-year visa that would act like a time-limited green card. That way, at least, non-citizens could vote with their feet and have some leeway to get away from an unscrupulous scientist.

But we can’t just improve how we respond, important as that is. We also have to work hard at finding our problems, so the third major area for improved regulation is scientific transparency, which will make it possible for other scientists to detect fraudulent work more easily. It should be required that data be made available to other researchers, on request, within 12 months of collection. (There could be some variation depending on the kind of research.)

The researchers running a study could be given an embargo period of two years (or some other interval chosen by a reasonable formula) in which to publish from their data, but there is no reason other scientists shouldn't be able to see the data before publication during such an embargo. After publication, other researchers should be given access. What is missing is transparency at the most fundamental level. Court precedents have given researchers who receive government funds control over their data -- only because no other rule was in place -- so the sole way to improve data transparency is to mandate it. At the very least, base data should be released on demand after publication.

Dealing with these fundamentals will yield good results. Most researchers are innocent; they are guilty of little more than reluctance to get involved in the hard, unrewarded work of whistleblowing. Merely tightening the straitjacket on researchers -- giving them more hoops to jump through, forcing them to recuse themselves from their own areas of expertise because of financial rewards earned by hard work -- will not stop the unscrupulous from failing to report conflicts that nobody will find unless they are reported. It will instead punish the ethical, harming them financially by taking away just rewards, while having next to no impact on the unethical.

In its simplest restatement, science faces a two-horned problem. On one horn, there is an enforcement problem: there is little chance of being caught in any 10-year period, and if one is caught the penalty is barely a slap on the wrist. This is exacerbated by statutes of limitations set to coincide with the interval during which those most likely to find out are ensconced in a holdover of feudal serfdom. On the other horn, huge rewards sometimes should legitimately accrue to people who spend their lives working very hard. Protecting such rewards is the entire purpose of our patent system, which encourages innovation and the creation of new economic value.

In summary, we cannot fix the enforcement problem in scientific fraud by making it harder for the rewards to occur. None of the currently proposed rule changes will even raise the risk premium for fraud. We will, however, slow the pace of research by taking the best researchers off the problems they know best, forcing them to recuse themselves over financial conflicts of interest.

Doing that, we will penalize researchers. We will also penalize top institutions by forcing them to step aside from work of great economic value to the nation -- for where there is a conflict of interest, value has been created. We need to attack the real problem head-on if we want good results and a science that remains respectable and economically productive. That problem is simply scientific fraud.

Brian Hanley

Brian Hanley is an entrepreneur and analyst who recently completed a Ph.D. with honors at the University of California at Davis.

The Mild Torture Economy

In his mock documentary Take the Money and Run (1969), Woody Allen plays the ambitious but remarkably unlucky bank robber Virgil Starkwell. He never makes the FBI’s Ten Most Wanted because, after all, it all depends on who you know. But he does manage to shave some time off one of his prison sentences by volunteering for medical research. He survives the experiment. There is one side effect, however, as the narrator explains in a solemn voiceover: He is temporarily transformed into a rabbi.

This sequence came to mind while reading The Professional Guinea Pig, by Roberto Abadie, just published by Duke University Press. “An estimated 90 percent of drugs licensed before the 1970s were first tested on prisoners,” writes Abadie. “Prisoners were in many ways a perfect population for a controlled experiment. Because they had similar living conditions they provided good control groups for clinical trials, while the financial and material benefits ensured a large supply of willing and compliant volunteers.”

Only in 1980 did the Food and Drug Administration ban the use of prisoners for medical research. Their circumstances made a mockery of informed consent. (Especially in Virgil’s case. “Prisoners received one hot meal per day,” the narrator explains: “a bowl of steam.”) But the demand for experimental subjects for biomedical research had to be met somehow. And so there has emerged the new regime of power and knowledge analyzed by Abadie, a visiting scholar with the health sciences doctoral program at the City University of New York Graduate Center.

His book is an ethnographic account of the subculture of “paid volunteers” recruited to serve as subjects for pharmaceutical testing -- with a particular focus on what he calls the “professionalized” guinea pigs who derive most (or all) of their income from this work. Volunteers receive “from $1200 for three or four days in less intensive trials,” according to Abadie, “to $5000 for three or four weeks in more extensive ones.”

Actually the term “work” is somewhat problematic here. The labor is almost entirely passive. Half of it, as Woody Allen once said about life itself, is just showing up. You are weighed and your blood taken, and there might be a few other tests, along with quite a lot of boredom. (One of Abadie’s informants describes it as participation in “the mild torture economy.”) Some of the guinea pigs fall back on it as a supplement to “low-paying jobs as cooks, construction workers, house painters, or bike messengers.” For others, it is their sole source of income. They enlist for up to eight rounds of testing per year, earning “a total estimated income of $15,000 to $20,000 in exceptionally good years.”

Higher rates of pay are available to those willing to endure unpleasant procedures. Likewise, there is a premium for testing psychiatric drugs -- though the considered opinion of old-time guinea pigs is that you just don’t earn enough to make it worth letting someone mess with your brain chemistry.

Abadie’s description of the guinea-pig milieu -- based largely on interviews with a number of them living in a bohemian neighborhood in Philadelphia -- focuses on how they understand the risks involved in making a living this way, including their preferred means of recovering between rounds of exposure to “phase I” testing. (That is the term for clinical trials in which pharmaceuticals shown to have low toxicity when given to animals are tried on human subjects.) Various dietary regimens are thought to have a purifying effect. An informal network keeps participants updated on new opportunities in the human-subject market, and there used to be a zine called Guinea Pig Zero that still has a web presence.

Most of Abadie’s informants are also members of an anarchist counterculture that prides itself on remaining outside corporate capitalism. And making your living as a guinea pig is certainly different from joining the rat race. But the “mild torture economy” is well integrated into the larger and more literal economy. Testing is a necessary stage of pharmaceutical development, with some 80,000 phase I trials -- each involving 30 to 100 human subjects -- being run each year; that works out to somewhere between 2.4 million and 8 million subject slots annually. The development of a pool of reliable but poorly paid “volunteers” (consisting mostly of young men who, as Abadie puts it, “use their bodies as ATMs to fund their lifestyles”) is one sign of the effect of deindustrialization on the labor market.

And the effect of becoming dependent on guinea-piggery as a source of income is that it creates an incentive to ignore the question of how exposure to experimental pharmaceuticals might affect you over the long run. “Beginners are more worried about risks than professionals,” notes Abadie. “Maybe this reflects the general population’s anxieties about biomedical research and its well-publicized abuses. Volunteers’ initial uneasiness focuses on the unknown effects of the drugs, but it also reflects a discomfort with a procedure they do not yet fully understand…. Some volunteers mentioned that they were somewhat concerned about developing cancer in the future.”

Not so, evidently, with those who had been through the process a few times: “Dependency on trial income, trial experiences that have not exposed them to side effects, and interactions with more experienced volunteers convinces newcomers that risks are not to be feared.” Just drink a couple of gallons of unsweetened cranberry juice and it’ll wash the corporate technoscience right out of your system….

Meanwhile, the FDA “inspects less than 1 percent of all clinical trials in this country,” writes Abadie, and paid volunteers lack the resources to challenge any abuses they may suffer.

Trials in phases II and III -- when a drug is tested on patients suffering from the condition it may help treat -- draw on a different pool of human subjects, with motivations beyond that of payment. But when the subjects are economically vulnerable, as with some of the poor AIDS patients discussed in later chapters of Abadie's study, it compounds the ethical problems facing an institutional review board trying to assess whether the research has scientific merit or is driven instead by business interests.

The IRB in this case oversees the work of a small, community-based organization, not a university (where many clinical trials are conducted), but Abadie suggests that its ambivalence is commonplace. Its members "recognize the benefits that can derive from a relationship with the industry, but at the same time they fear that prospective financial gains can influence the research. These anxieties are reflected particularly in their views of the informed-consent process ... in which volunteers are supposed to be able to evaluate risks and benefits independently of other considerations."

The major weakness of this otherwise intriguing and worrying book is that it provides no clear sense of how typical the “professionalized” guinea pigs in Philadelphia may be -- and how central such repeat-performing volunteers are to the industry employing them.

Abadie maintains that a cohort of full-time human subjects emerged after the pool of prisoners dried up 30 years ago. The needs of the pharmaceutical industry led to the formation of “a group of reliable, knowledgeable, and willing subjects who depend on participation in trials for income to support themselves.” Okay, but just how dependent is the industry on them? What portion of the population of human research subjects for pharmaceutical research consists of such full-timers?

Invocations of “the new subjectivity required by neoliberal governmentality” may have their place in defining the situation. But hard numbers would be good, too. The fact that we don’t have them is part of the problem. But then there aren’t too many dimensions of the health care industry that don’t look like problems, right now.

Scott McLemee

Putting the 'Humanities' in 'Digital Humanities'

Reflecting on The Humanities and Technology conference (THAT Camp) recently held in San Francisco, what strikes me most is that digital humanities events consistently tip toward the logic-structured digital side of things -- they are not balanced out by the humanities side. But what I mean by that has itself been a problem I've been mulling for some time now: what is the missing contribution from the humanities?

I think this digital dominance revolves around two problems.

The first is an old problem. The humanities’ pattern of professional anxiety goes back to the 1800s and stems from pressure to incorporate the methods of science into our disciplines or to develop our own, uniquely humanistic, methods of scholarship. The "digital humanities" rubs salt in these still open wounds by demonstrating what cool things can be done with literature, history, poetry, or philosophy if only we render humanities scholarship compliant with cold, computational logic. Discussions concern how to structure the humanities as data.
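
To see what "structuring the humanities as data" means in practice, consider a toy example. The schema below is invented for illustration -- it is not any actual project's format -- but it shows the move that makes many humanists flinch: a line of Blake becomes rows in a table.

```python
# Invented toy schema -- not any real project's format. The point is
# the move itself: a line of verse becomes rows of queryable data.

line = "Tyger Tyger, burning bright"  # William Blake, "The Tyger"

records = [
    {"pos": 1, "word": "Tyger",   "part_of_speech": "noun"},
    {"pos": 2, "word": "Tyger",   "part_of_speech": "noun"},
    {"pos": 3, "word": "burning", "part_of_speech": "verb"},
    {"pos": 4, "word": "bright",  "part_of_speech": "adjective"},
]

# Once itemized, the line answers database-style questions...
nouns = [r["word"] for r in records if r["part_of_speech"] == "noun"]
print(nouns)  # ['Tyger', 'Tyger']
# ...though nothing in the rows registers why the repetition burns.
```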

The showy, often highly visual products built on such data -- and the ease with which the information in them is intuitively grasped -- appear at first blush to be a triumph of quantitative thinking. The pretty animated graphs and fluid screen forms belie the fact that boring spreadsheets and databases hold the details. Humanities scholars, too, often recoil from the presumably shallow grasp of a subject that data visualization invites.

For many of us trained in the humanities, to contribute data to such a project feels a bit like chopping up a Picasso into a million pieces and feeding those pieces one by one into a machine that promises to put it all back together, cleaner and prettier than it looked before.

Which leads to the second problem, the difficulty of quantifying an aesthetic experience and — more often — the resistance to doing so. A unique feature of humanities scholarship is that its objects of study evoke an aesthetic response from the reader (or viewer). While a sunset might be beautiful, recognizing its beauty is not critical to studying it scientifically. Failing to appreciate the economy of language in a poem about a sunset, however, is to miss the point.

Literature is more than the sum of its words on a page, just as an artwork is more than the sum of the molecules it comprises. To itemize every word or molecule on a spreadsheet is simply to apply more anesthetizing structure than humanists can bear. And so it seems that the digital humanities is a paradox, trying to combine two incompatible sets of values.

Yet, humanities scholarship is already based on structure: language. "Code," the underlying set of languages that empowers all things digital, is just another language entering the profession. Since the application of digital tools to traditional humanities scholarship can yield fruitful results, perhaps what is often missing from the humanities is a clearer embrace of code.

In fact, "code" is a good example of how something that is more than the sum of its parts emerges from the atomic bits of text that logic demands must be lined up next to each other in just such-and-such a way. When well-structured code is combined with the right software (e.g., a browser, which itself is a product of code), we see William Blake’s illuminated prints, or hear Gertrude Stein reading a poem, or access a world-wide conversation on just what is the digital humanities. As the folks at WordPress say, code is poetry.

I remember 7th-grade homework assignments programming onscreen fireworks explosions in BASIC. At the time, I was willing to patiently decipher code only because of the promise of cool graphics on the other end. When I was older, I realized that I was willing to read patiently through Hegel and Kant because I had learned to see the fireworks in the code itself. To avid readers of literature, the characters of a story come alive, laying bare our own feelings or moral inclinations in the process.

Detecting patterns, interpreting symbolism, and analyzing logical inconsistencies in a text are all techniques of humanities scholarship. Perhaps the digital humanities' greatest gift to the humanities will be the ability to invest a generation of "users" in the techniques -- and the practiced, meticulous attention to detail -- required to become a scholar.

Phillip Barron

Trained in analytic philosophy, Phillip Barron is a digital history developer at the University of California at Davis.

