
Research competitiveness and productivity are complex subjects that should inform the development and oversight of R&D programs at the national, state and institutional levels. From a national policy perspective, studies of our national innovation ecosystem – of the factors that promote discovery and innovation – are important to America’s economic vitality.

Ironically, rather than advance our knowledge and discussion of these important topics, many university presidents seem more inclined to debate the shortcomings of available measures such as the rankings of U.S. News & World Report, sometimes even threatening to boycott the surveys. What is more, these same presidents defend the absence of adequate measurements of institutional performance by saying that the strength of American higher education lies in the diversity of its institutions. So why not develop a framework that characterizes institutional variety and demonstrates productivity understandably, effectively and broadly throughout the spectrum of our institutions?

Of course, it is not easy to characterize the wide range of America’s more than 3,500 colleges and universities. Even among the more limited number of research universities, institutional diversity is so broad that every approach to rank or even classify institutions has been rightly criticized. Most research rankings use only input measures, such as the amount of federal funding or total expenditures for research, even though funding agencies would be better served by information about outcomes -- the actual research performance of universities.

The Center for Measuring University Performance, founded by John Lombardi, has compiled some of the most comprehensive data on research universities. Its annual studies examine the multi-dimensional aspects of research universities and rank them in groups defined by relative performance on various measurable characteristics -- research funding, private giving, faculty awards, doctoral degrees, postdoctoral fellows and measures of undergraduate student quality.

The 2005 report of the Center and a recent column on this site by Lombardi note the upward or downward skewing of expenditure rankings by the mere presence or absence of either a medical or an engineering school, thereby acknowledging the problems of comparability among institutions. Lombardi hints at a much-needed analysis of research competitiveness/strengths and productivity, stating, “Real accountability comes when we develop specific measures to assess the performance of comparable institutions on the same measures.”

Indeed, a particularly thorny question has always been how to create meaningful comparisons between large and smaller research universities, or even between specific research programs within universities. This struggle seems to arise in part from the fundamental question that underlies the National Science Foundation rankings -- namely, should winning or expending more research dollars be the only criterion for a higher ranking? I think not. Quite simply, in the absence of output measures, the more-is-better logic is flawed. If research productivity is equal, why should a university that spends more money on research be ranked higher than one that spends less? The size of a research budget alone does not determine how productive its outcomes are. Other contributing factors need to be considered. For example, some universities have much larger licensing revenues than others with comparable research budgets, and surveys that compare licensing revenues with research income show no correlation between the two, particularly when the figures are scaled.

Because there are no established frameworks to get at the various factors that are likely involved, I think a good beginning would be to characterize research competitiveness and productivity separately.

Research competitiveness:

Because available R&D dollars vary widely by agency and field of research, and because universities do not have uniform research strengths, I suggest that portfolio analyses of research funding need to be performed. A given university’s research portfolio can be described, quantified and weighed against the percentage of funding available from each federal agency and, when possible, by the sub-areas of research supported by each agency. For example, the upward skewing of rankings is partially explained by the fact that 70 percent of all federal funding is directed at biomedical research. Likewise, the U.S. Department of Agriculture funds only 3 percent of federal research, but provides virtually all of such funds to land grant universities.
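To make the arithmetic concrete, the sketch below (in Python, using entirely hypothetical agencies and dollar figures) shows one way such a portfolio analysis could be set up: a university’s funding shares by agency are set against the shares of total federal funding each agency supplies.

    # A sketch of a funding-portfolio comparison; all agencies and dollar
    # figures here are hypothetical and serve only to show the arithmetic.

    # Hypothetical federal R&D obligations by agency (millions of dollars)
    federal_by_agency = {"NIH": 70.0, "NSF": 12.0, "DOD": 10.0, "DOE": 5.0, "USDA": 3.0}

    # Hypothetical obligations won by one university (millions of dollars)
    university_by_agency = {"NIH": 40.0, "NSF": 30.0, "DOD": 20.0, "DOE": 8.0, "USDA": 2.0}

    def shares(portfolio):
        """Convert dollar amounts into percentage shares of the whole portfolio."""
        total = sum(portfolio.values())
        return {agency: 100.0 * amount / total for agency, amount in portfolio.items()}

    federal_shares = shares(federal_by_agency)
    university_shares = shares(university_by_agency)

    # A positive difference means the university is more heavily concentrated in
    # that agency's research areas than the federal portfolio as a whole.
    for agency in federal_by_agency:
        diff = university_shares[agency] - federal_shares[agency]
        print(f"{agency}: university {university_shares[agency]:.1f}% "
              f"vs. federal {federal_shares[agency]:.1f}% ({diff:+.1f} points)")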

Analyses should focus on federal obligations for R&D, rather than total expenditures, because federal obligations are by and large competitively awarded and thus come closest to demonstrating competitiveness. Available data, however, present various challenges. For example, some federal funding that supports activities other than research will need to be excluded from analyses (e.g., large contracts that give universities management of support programs). Also, data are available only at the macro level of disciplines, such as engineering versus life sciences, which means that detailed distinctions between research areas will be difficult to achieve.

In addition, I submit that research competitiveness can only be demonstrated when one university's research portfolio is growing faster than those of other comparable universities, or faster than the rate at which federal funding itself is growing. I call this a “percent growth” comparison and think that, although formally equivalent, it is intuitively easier to understand than the “market share” approach used by Roger Geiger in his 1993 book, Research and Relevant Knowledge: American Research Universities Since World War II. Geiger’s 2004 book, Knowledge and Money: Research Universities and the Paradox of the Marketplace, clearly demonstrates how some universities have gained while others have lost their competitive positions in federally funded research over the years.
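To illustrate why the two framings are equivalent, here is a small sketch with invented numbers: a university whose portfolio grows faster than total federal funding is, by the same token, gaining market share.

    # The "percent growth" and "market share" framings, with invented numbers.

    def percent_growth(start, end):
        """Percentage change from a starting value to an ending value."""
        return 100.0 * (end - start) / start

    # Hypothetical federal obligations (millions of dollars) at two points in time
    federal_total = {"year1": 20000.0, "year2": 26000.0}
    university_a = {"year1": 200.0, "year2": 280.0}
    university_b = {"year1": 200.0, "year2": 240.0}

    federal_rate = percent_growth(federal_total["year1"], federal_total["year2"])  # 30%
    rate_a = percent_growth(university_a["year1"], university_a["year2"])          # 40%
    rate_b = percent_growth(university_b["year1"], university_b["year2"])          # 20%

    # The same facts expressed as market share: A's share rises, B's share falls.
    for name, uni in (("A", university_a), ("B", university_b)):
        share = [100.0 * uni[y] / federal_total[y] for y in ("year1", "year2")]
        print(f"University {name}: share {share[0]:.2f}% -> {share[1]:.2f}%")
    print(f"Federal growth {federal_rate:.0f}%, A grew {rate_a:.0f}%, B grew {rate_b:.0f}%")

In this invented example, University A outpaces the 30 percent growth in federal funding and so gains share, while University B grows in absolute dollars yet loses competitive ground -- the distinction the unscaled rankings obscure.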

Ideally, if the data were available, research strengths should be examined over time at the micro level, by sub-disciplines or by areas of emphasis. For example, because growth in agency budgets has not occurred uniformly across agencies or over time, it would be instructive to note how portfolio shares change over time and how a particular university has fared in specific research areas. Battelle’s Technology Practice has used new tools for the graphical representation of research portfolios to draw some interesting conclusions about how some universities are linked to industrial clusters.

Productivity:

Relative growth is not enough, because it leaves open the question of productivity relative to scale. Unfortunately, scaled research productivity data are scarce. Two seldom-mentioned sources are, however, available.

First, there is the 1997 book, The Rise of American Research Universities: Elites and Challengers in the Postwar Era, in which Hugh Davis Graham and Nancy Diamond offer new analyses, including comparisons scaled by faculty size. Their approach yields per-faculty productivity data on (1) research funding, (2) publications and (3) comparisons between private and public universities. Although the data are now dated, and others have noted difficulties with the underlying information on faculty size and faculty roles, I believe that the methodology employed by Graham and Diamond is worth revisiting, refining and building upon.
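As a simple illustration of the kind of scaling Graham and Diamond performed -- not their actual data or method -- consider the following sketch with invented numbers:

    # Per-faculty scaling in the spirit of Graham and Diamond; figures are invented.
    # Institutions: (federal R&D in millions of dollars, publications, faculty size)
    institutions = {
        "Larger University": (400.0, 6000, 2000),
        "Smaller University": (120.0, 2100, 500),
    }

    for name, (funding, pubs, faculty) in institutions.items():
        funding_per_faculty = funding * 1_000_000 / faculty
        pubs_per_faculty = pubs / faculty
        print(f"{name}: ${funding_per_faculty:,.0f} of research funding and "
              f"{pubs_per_faculty:.1f} publications per faculty member")

In this hypothetical case the smaller institution, with a research budget less than a third the size, comes out ahead on both per-faculty measures -- precisely the kind of result that unscaled rankings cannot reveal.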

Second, there are the annual surveys from the Association of University Technology Managers that scale productivity in terms of output per million dollars of research activity. The AUTM data look at measurable outputs such as disclosures, patents, licenses and new company startups. Although some of these data are subject to analytical problems of their own, it is notable that the institutions that emerge as the most productive are not those at the top of the NSF rankings. More recently, the Milken Institute has begun using the AUTM data to probe the free market system as related to university research.
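The principle behind the AUTM scaling can be shown with a short sketch; the institutions and figures below are invented, but the arithmetic of output per million dollars of research is the point.

    # AUTM-style scaling: outputs per million dollars of research; figures are invented.
    data = {
        "University X": {"research_millions": 800.0, "disclosures": 240, "licenses": 60, "startups": 8},
        "University Y": {"research_millions": 250.0, "disclosures": 150, "licenses": 45, "startups": 6},
    }

    for name, d in data.items():
        scaled = {k: v / d["research_millions"] for k, v in d.items() if k != "research_millions"}
        formatted = ", ".join(f"{k} {v:.2f}" for k, v in scaled.items())
        print(f"{name} (per $1 million of research): {formatted}")

Scaled this way, the institution with the smaller absolute budget can easily prove the more productive of the two.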

Beyond competitiveness and productivity:

The research competitiveness and productivity analyses discussed above are modest suggestions to improve upon the commonly used and all too simplistic more-is-better approach of the NSF rankings. Still, if we are actually to improve our analytical framework so as to advance the R&D policy debate, we will need to develop more sophisticated tools.

For example, in the productivity domain and in regard to determining how one piece of research interacts with another, scaled comparisons could also be generated by measuring per-faculty citations and their relationship to other publications. Here, I think a good start could be made with the various citation indices published by the Institute for Scientific Information and with the newer Faculty Scholarly Productivity Index. None of these indices has, to my knowledge, been related to funding data, which presents an intriguing opportunity.
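As a rough illustration of what relating citation and funding data might look like -- using purely hypothetical values, not actual index figures -- one could scale both by faculty size and ask whether they move together:

    # Relating per-faculty citations to per-faculty funding; all values are invented
    # stand-ins for what citation indices and NSF obligation data might supply.

    def pearson(xs, ys):
        """Pearson correlation coefficient for two equal-length lists of numbers."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical records: (institution, citations, faculty size, federal R&D in millions)
    records = [
        ("A", 60000, 1500, 350.0),
        ("B", 30000, 600, 120.0),
        ("C", 45000, 2000, 400.0),
        ("D", 24000, 800, 220.0),
    ]

    citations_per_faculty = [c / f for _, c, f, _ in records]
    funding_per_faculty = [m / f for _, _, f, m in records]
    print(f"Correlation of the two scaled measures: "
          f"{pearson(citations_per_faculty, funding_per_faculty):.2f}")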

An issue not yet addressed by either productivity or competitiveness measures is that of tracking intellectual property flows. How can we begin to trace the flows of ideas and new technologies generated by universities? This question might benefit from the kind of cluster analysis of citations first pioneered by ISI when it “discovered” the emergence of the new field of immunology. The patent database would be another resource that could be brought to this task. Indeed, my colleague Gary Markovits, founder and CEO of Innovation Business Partners, has developed new processes and search tools that improve the hit rate of patent database searches, and he has worked with the Office of Naval Research on ways to accelerate the rate of innovation at its laboratories. Universities and other federal laboratories would do well to consider some of these approaches.

The public and Congress are now clamoring for accountability in higher education, just as they are with regard to health care, and while the college accountability discussions currently focus on undergraduate education, it won’t be long before they spread to research spending. No longer can we simply assert that adequate and comparable measurements are impossible, expecting the public to trust blindly that we in the academy know quality when we see it. As scholars and researchers, we can and must do better. Otherwise, the predictable result will be public distrust that makes it impossible to sustain even the current levels of federal R&D investment.
