Killing Peer Review

Can the social Web produce a "killer app" that would do away with the traditional editorial process at scholarly journals?

A Nudge for the Neediest

Study of impact of need-based financial aid finds that extra grants helped the students least likely to succeed, but did little to boost overall academic progress.

We're All Getting Better

Universities celebrate their achievements in an endless series of public pronouncements. Like the imaginary residents of Lake Wobegon, all universities are above average, all are growing, and all improve. In most cases, these claims of progress rest on a technically accurate foundation: applications did increase, the average SAT scores did rise, the amount of financial aid did climb, private gifts did spike upward, and faculty research funding did grow.

No sensible friend of the institution wants to spoil the party by putting these data points of achievement into any kind of comparative context. There is little glory in a reality check.

Still, the overblown claims of achievement often leave audiences wondering how all these universities can be succeeding so well and at the same time appear before their donors and legislators, not to mention their students, in a permanent state of need. This leads to skepticism and doubt, neither of which is good for the credibility of university people. It also encourages trustees and others to have unrealistic expectations about the actual growth processes of their institutions.

For example, while applications at a given institution may be up, and everyone cheers, the total pool of applicants for all colleges and universities may be up as well. If a college's applications for the years 1998 to 2002 are up by 10 percent, it may nonetheless have lost ground, since the number of undergraduate students attending college nationally grew by 15 percent in the same period. Growth is surely better than decline, but only growth relative to the marketplace for students signals real achievement.

Similar issues affect such markers as test scores. If SAT scores for the freshman class rise by eight points, the admissions office should be pleased; but if test scores nationally rose by nine points (as they did between 1998 and 2004), the college may have lost ground relative to the marketplace.

An actual example with real data may help. Federal research expenditures provide a key indicator of competitive research performance. Universities usually report increases in this number with pride, and well they should because the competition is fierce. A quick look at the comparative numbers can give us a reality check on whether an increase actually represents an improvement relative to the marketplace. 

Research funding from federal sources is a marketplace of opportunity defined by the amount appropriated to the various federal agencies and the amount they made available to colleges and universities. The top academic institutions control about 90 percent of this pool and compete intensely among themselves for a share. This is the context for understanding the significance of an increase in federal research expenditures.

A review of the research performance of the top 150 institutions reporting federal research expenditures clarifies the meaning of the growth we all celebrate (TheCenter, 2004). The total pool of dollars captured by these top competitors grew by about 14 percent from 2001 to 2002. Almost all of these institutions increased their research expenditures over this short period, but only a little over half (88 institutions) met or exceeded the growth of the pool; the rest, despite their gains, lost market share to their colleagues in the top 150.

If we take a longer-range perspective, using the data for 1998 to 2002, the pool of funds spent from federal sources by these 150 institutions grew by 45 percent. For a university to keep pace, it would need to grow by 45 percent as well over the same period. Again, about half of the 150 institutions (80) managed at least this growth rate. Almost all the remaining institutions also improved over this longer period, but not by enough to stay even with the growth of opportunity.

Even comparative data expressed in percentages can lead us into confused thinking. We can imagine that equal percentage growth makes us equally competitive with other universities showing the same percentage growth. This is a charming conceit, but it misrepresents the difficulty of the competition.

At the top of the competition, Johns Hopkins University would need to capture a sufficient increase in federal grants to generate additional spending of over $123 million a year just to stay even with the total pool's 14 percent increase from 2001 to 2002 (it did better than that, with 16 percent growth). The No. 150 research university in 2001, the University of Central Florida, would need just over $3 million to meet the same 14 percent increase. UCF did much better than that, growing by a significant 36 percent.

Does this mean UCF is outperforming Hopkins? Of course not. JHU added $142 million to its expenditures while UCF added $7.6 million.  
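
The arithmetic is easy to sketch. Here is a minimal Python illustration; the 2001 base figures are not stated above and are inferred from the reported dollar gains and growth rates, so the output is illustrative only.

    # Sketch of the keep-pace arithmetic behind the JHU/UCF comparison.
    # 2001 base expenditures are inferred from the reported gains and
    # growth rates; they are approximations, not official figures.
    POOL_GROWTH = 0.14  # growth of the federal research pool, 2001-2002

    institutions = {
        # name: (dollars added 2001-2002, reported growth rate)
        "Johns Hopkins": (142_000_000, 0.16),
        "Central Florida": (7_600_000, 0.36),
    }

    for name, (added, rate) in institutions.items():
        base_2001 = added / rate             # inferred 2001 expenditures
        keep_pace = base_2001 * POOL_GROWTH  # dollars needed to match the pool
        share_change = (1 + rate) / (1 + POOL_GROWTH) - 1  # relative share change
        print(f"{name}: needed ~${keep_pace / 1e6:.0f}M to keep pace, "
              f"added ${added / 1e6:.1f}M, relative share change {share_change:+.1%}")

Both institutions beat the pool, but the dollars required merely to keep pace differ by two orders of magnitude, which is exactly why equal percentage growth does not mean equal achievement.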

The lesson here, as my colleague Betty Capaldi at the State University of New York system office reminded me when she suggested this topic, is that we cannot understand the significance of a growth number without placing it in an appropriate comparative context or understanding the relative significance of the growth reported.

It may be too much to ask universities to drop the public relations spin that informs their communications with the public, but people who manage by spin usually make the wrong choices.

John V. Lombardi

The Inadequacy of Increased Disclosure

In recent years there has been a strong push to attempt to regulate science by increasing disclosure of financial conflicts of interest (FCOI). As well-intentioned as this regulatory approach might be, it is based on flawed assumptions, poses the risk of becoming a self-perpetuating end in itself, and is a distraction from the underlying serious problem.

It is hard to see how strengthened FCOI disclosures could have a significant effect when the very reports from Sen. Charles Grassley's Finance Committee show that there is next to no enforcement. If the dog is all but toothless now, will someone unscrupulous hesitate to game the system by simply not reporting? A clever operator will get a lawyer to guide such behavior. But this is hardly the extent of what we should be thinking about.

Conflict of interest rules are supposed to control corruption by recusing those with a financial stake. Where the incentives reward it, corruption is the rational response, so the mythical "rational players" will be corrupt. In political culture corruption is a given, and conflict of interest rules have had some effect in the legislatures of the United States. But science is not politics.

Scientific culture presumes honesty, but the data say that scientific fraud is widespread and growing. One recent study boggles the mind: some 9,000 papers were flagged for possible plagiarism, the first 212 examined in full all proved to be probable plagiarism, and merely to be flagged a paper needed substantial matches in its abstract. It recently came to light that virtually an entire subfield in medicine was a fraud, though it is not clear whether it was harmful. I will not belabor this, but basic sense tells us that if the dumbest kind of fraud is so widespread, we doubtless have serious problems elsewhere.

Even in those rare cases when fraud is discovered, there is little punishment. Looking at cases of scientific fraud, one finds that usually no charges are filed, and authors don't necessarily even withdraw their papers. When they do withdraw them, there is little facility for recording that fact, and the papers can remain available in NIH and other databases without so much as a warning flag.

A European pilot project has made a stab at the problem, to mixed reviews; a Scifraud Web site has fared similarly. At worst, research privileges may be taken away. In one of the few cases I am aware of in which charges were filed, the South Korean case of Hwang Woo-Suk, he was given a prestigious award while standing trial for fraud -- and was unable to attend the ceremony for that reason. In other words, in science a life of crime is easy: at worst one gets a slap on the wrist. For those who commit frauds of various kinds, mostly one wins -- publications generate promotions, grants, and more.

Look at the situation objectively and one must ask: why bother doing real research if you can scout out what is probably true from hard-working researchers with real data, then submit a paper that looks perfect, "proving the hypothesis" with "all the latest techniques"? As we saw, simple plagiarism of the dumbest kind is probably endemic. There is software that can fake a gel and software that can fake flow cytometry data, and one must assume it is used. Call it "theft by perfection." We have no data at all on such fraud, but anecdotal evidence forming semi-random samples of significant size certainly suggests it occurs in certain areas of bioscience. So if there is great upside in scientific fraud, where's the downside?

Perhaps one might be exposed, but even then, unless it's really high profile, few people will know. Is the chance of being caught even as high as one in 5,000? Those thousands of fake papers say no, and instead suggest it may be one in 10,000 or worse.

Given all of this, the rational response would be to face the scientific fraud problem head on rather than enact window dressing regulations, and I have a few proposals for how to do that.

The first regulatory change we need is to throw out the six-year statute of limitations. Folks, under current National Institutes of Health rules the case of the midwife toad would not have been exposed! Isn't that ridiculous? Scientists are (or should be) some of the better record keepers on the planet. Yes, records aren't perfect, nor are memories, but mostly we have them around somewhere, or at least enough of them. We should remember also that scientists have been selected for superior memories and analytic abilities. In science, graduate students are usually the first to find out about fraud, because they see exactly what is going on. The median time from entering graduate school to the awarding of a Ph.D. is six years, and this is precisely the period in which students are extremely vulnerable to retaliation.

The second regulatory change concerns intra-university investigations. Institutional collusion whitewashes these investigations unless a professor or dean takes up the cause. Flatly, intra-university procedures don't work for graduate students and post-docs, and those who use them tend to find themselves pariahs. In this way our biosciences system has been systematically driving out some of the most ethical and capable researchers in training, who leave when subjected to retaliation. Keep your head down and don't rock the boat is the watchword in graduate school these days; I get the strong impression that those who went through grad school 30 years ago have little clue how bad it is. Ad hoc committees of arbitrarily chosen people, who I believe are sometimes interfered with backstage by chancellors, can exhibit phenomenally poor investigative skills when presented with claims. Those who serve on such committees are in a lose-lose position and have no incentive to be there.

The only way they win is to curry favor with the administration.

Consequently, responsibility for academic misconduct complaints and whistleblower reports must be removed from the institutions in which they occur. I propose that such investigations be turned over to the Justice Department, with the Office of Research Integrity moved into Justice for special adjudication. Researchers should be held personally liable, they should be charged with criminal conduct, and efforts should be made to cut deals with them for fingering others in their loose circle. I strongly suspect that fraud appears in clusters linked by human networks in academia as it does elsewhere. Scientific fraud should be a criminal matter at the federal level if federal funds are used. Papers containing fraudulent data should be removed from federally funded databases and replaced with an abstract of the case and a link to the case file.

Non-citizen students and post-docs are even more vulnerable to manipulation and extortion than citizens because of their dependence on their professor for a visa. This enables the unscrupulous to exert even more retaliatory power. I suspect the only cure is to grant a 10-year visa that would act like a time-limited green card. That way, at least non-citizens can vote with their feet and have some leeway to get away from an unscrupulous scientist.

But we can’t just improve what we do in response, although that is important. We also have to work hard to find our problems, so the third major area for improved regulation is to create scientific transparency. This will make it possible for other scientists to more easily detect fraudulent work. It should be required that data be made available within 12 months of collection to other researchers, on request. (There could be some variation depending on the kind of research.)

The researchers running a study could be given an embargo period of two years (or some other interval chosen by a reasonable formula) to publish based on their data, but there is no reason why other scientists shouldn't be able to see the data before publication during such an embargo. After publication, other researchers should be given access. It is transparency at the most fundamental level that is missing. Court precedents have given researchers who receive government funds control over their data simply because no other rule was in place, so the only way to improve data transparency is to mandate it. At the very least, base data should be released on demand after publication.

Dealing with these fundamentals will yield good results. Most researchers are innocent; they are guilty of little more than reluctance to get involved in the hard work of whistleblowing for no reward. Tightening the straitjacket on researchers -- giving them more hoops to jump through and forcing them to recuse themselves from their own areas of expertise because of financial rewards earned by hard work -- will not prevent the unscrupulous from simply failing to report conflicts that nobody will find unless they are reported. It will, instead, punish the ethical, financially harming them by taking away just rewards, while having next to no impact on the unethical.

In its simplest restatement, science has a two-horned problem. On the one hand, there is an enforcement problem: there is little chance of being caught in any 10-year period, and if one is caught the penalty is barely a slap on the wrist. This is exacerbated by statutes of limitations set to coincide with the interval during which those most likely to find out are ensconced in a feudal holdover of serfdom. On the other hand, huge rewards sometimes should legitimately accrue to people who spend their lives working very hard. Protecting such rewards is the entire purpose of our patent system, which encourages innovation and the creation of new economic value.

In summary, we cannot fix the enforcement problem in scientific fraud by making it harder for the rewards to occur. We won't even raise the risk premium for fraud by any of the current rule changes proposed. We will, however, slow the pace of research by taking the best researchers off of problems they know best because they are forced to recuse themselves due to financial conflicts of interest.

Doing that, we will penalize researchers. We will also penalize top institutions by forcing them to step aside from furthering what has great economic value to the nation, because where there is a conflict of interest, value has been created. We need to attack the real problem head-on if we want to get good results and keep science respectable and economically productive. The problem is simply scientific fraud.

Brian Hanley

Brian Hanley is an entrepreneur and analyst who recently completed a Ph.D. with honors at the University of California at Davis.

Putting the 'Humanities' in 'Digital Humanities'

Reflecting on the recent Humanities and Technology conference (THAT Camp) in San Francisco, what strikes me most is that digital humanities events consistently tip toward the logic-structured digital side of things; they are less balanced out by the humanities side. But what I mean by that has itself been a problem I've been mulling for some time now. What is the missing contribution from the humanities?

I think this digital dominance revolves around two problems.

The first is an old problem. The humanities’ pattern of professional anxiety goes back to the 1800s and stems from pressure to incorporate the methods of science into our disciplines or to develop our own, uniquely humanistic, methods of scholarship. The "digital humanities" rubs salt in these still open wounds by demonstrating what cool things can be done with literature, history, poetry, or philosophy if only we render humanities scholarship compliant with cold, computational logic. Discussions concern how to structure the humanities as data.
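
What "structuring the humanities as data" looks like in practice is easiest to see with a toy example. The sketch below is hypothetical -- the fields and theme tags are mine, not drawn from any actual project -- but it shows the kind of record on which such discussions turn: a poem reduced to something software can count, sort, and display.

    # Hypothetical illustration: a poem reduced to a structured record,
    # the raw material on which digital humanities tools operate.
    poem = {
        "title": "The Tyger",
        "author": "William Blake",
        "year": 1794,
        "collection": "Songs of Experience",
        "lines": [
            "Tyger Tyger, burning bright,",
            "In the forests of the night;",
            # ... remaining lines omitted
        ],
        "themes": ["creation", "awe", "fire"],  # invented tags, for illustration
    }

    # Once the text is data, counting becomes trivial -- which is precisely
    # what some humanists find reductive.
    word_count = sum(len(line.split()) for line in poem["lines"])
    print(poem["title"], "-", word_count, "words in the excerpt")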

The showy and often very visual products built on such data, and the ease with which the information within them is intuitively understood, appear at first blush to be a triumph of quantitative thinking. The pretty, animated graphs and fluid screen forms belie the fact that boring spreadsheets and databases contain the details. Humanities scholars also often recoil from the presumably shallow grasp of a subject that data visualization invites.

For many of us trained in the humanities, to contribute data to such a project feels a bit like chopping up a Picasso into a million pieces and feeding those pieces one by one into a machine that promises to put it all back together, cleaner and prettier than it looked before.

Which leads to the second problem, the difficulty of quantifying an aesthetic experience and — more often — the resistance to doing so. A unique feature of humanities scholarship is that its objects of study evoke an aesthetic response from the reader (or viewer). While a sunset might be beautiful, recognizing its beauty is not critical to studying it scientifically. Failing to appreciate the economy of language in a poem about a sunset, however, is to miss the point.

Literature is more than the sum of its words on a page, just as an artwork is more than the sum of the molecules it comprises. To itemize every word or molecule on a spreadsheet is simply to apply more anesthetizing structure than humanists can bear. And so it seems that the digital humanities is a paradox, trying to combine two incompatible sets of values.

Yet, humanities scholarship is already based on structure: language. "Code," the underlying set of languages that empowers all things digital, is just another language entering the profession. Since the application of digital tools to traditional humanities scholarship can yield fruitful results, perhaps what is often missing from the humanities is a clearer embrace of code.

In fact, "code" is a good example of how something that is more than the sum of its parts emerges from the atomic bits of text that logic demands must be lined up next to each other in just such-and-such a way. When well-structured code is combined with the right software (e.g., a browser, which itself is a product of code), we see William Blake’s illuminated prints, or hear Gertrude Stein reading a poem, or access a world-wide conversation on just what is the digital humanities. As the folks at WordPress say, code is poetry.

I remember 7th-grade homework assignments programming onscreen fireworks explosions in BASIC. At the time, I was willing to patiently decipher code only because of the promise of cool graphics on the other end. When I was older, I realized that I was willing to read patiently through Hegel and Kant because I had learned to see the fireworks in the code itself. To avid readers of literature, the characters of a story come alive, laying bare our own feelings or moral inclinations in the process.

Detecting patterns, interpreting symbolism, and analyzing logical inconsistencies in a text are all techniques used in humanities scholarship. Perhaps the digital humanities' greatest gift to the humanities will be the ability to invest a generation of "users" in those techniques and in the practiced, meticulous attention to detail required to become a scholar.

Phillip Barron

Trained in analytic philosophy, Phillip Barron is a digital history developer at the University of California at Davis.

Pinching Pennies on Research

The Bush budget plan would provide small increases for the NIH and NSF, while cutting spending on other programs.

Brawl at Brown Over Who Owns Research

Brown moves to assert more rights to faculty inventions, and some professors are unhappy.

Standing Up by Sitting Out

Scientists are boycotting Kansas hearings to debate the pros and cons of evolution.

Guidance on Stem Cells

The National Academies issued guidelines for scientists conducting the research.

Microsoft's Academic Agenda

Over the last 18 months, Microsoft has shifted its support for academic research to involve more universities and more kinds of studies. The shifts come at a time when Bill Gates, the company's founder, has become increasingly concerned about declining support for key research agencies and declining student interest in computer science.

