New data from a company that ranks research universities primarily based on per-capita faculty productivity suggest that there are consistent and dramatic disparities in research output between public and private institutions. Lawrence B. Martin, chief scientific consultant for Academic Analytics and dean of the graduate school at the State University of New York at Stony Brook, said the data provide evidence that a phenomenon projected for decades has now arrived: as top private institutions grow richer, public universities have found themselves unable to compete on the resource front, and the competitive edge enjoyed by private institutions has ultimately spilled over into research and all that flows from it (journal articles, grants and books).
But many administrators and higher education researchers caution that while the thrust of the data from Academic Analytics might confirm sneaking suspicions (and even conventional wisdom in some quarters), the results should not be read too broadly to reflect a significant shift in the public/private balance. Some individuals strongly criticized the company's methodology, while many who had little to say specifically about the two-year-old, for-profit entity’s data said they’d be awaiting this year’s planned release of the National Research Council’s updated departmental ratings – still the gold standard for assessing doctoral education – to see if they confirm what Academic Analytics found.
“Before drawing firm conclusions, I would wait to see what the National Research Council’s rankings say,” said Peter Lee, vice provost for research at Carnegie Mellon University. (The private institution performed exceedingly well in the Academic Analytics 2005 rankings, landing in the No. 6 spot among large research universities.) “If they affirm this, I think that there would be an interesting discussion to have.”
Martin said the disparities in research output can be found at both an institutional and departmental level. Institutionally, only one public university (the highly specialized University of California at San Francisco) cracks Academic Analytics’s list of top 10 large research universities in 2005. Meanwhile, the elite publics -- among them the University of California at Berkeley and the flagships in Wisconsin, Washington, Virginia, North Carolina and Michigan -- are ranked 13, 14, 17, 18, 25 and 27 in the rankings, released in January and meant to measure faculty research productivity.
On a departmental level, newly released statistics from Academic Analytics on 33 selected disciplines also point to gaps in research output. Average rankings of departmental programs from 2003-4, 2004-5 and 2005-6, based on the company’s “Faculty Scholarly Productivity Index,” which takes into account grants received, awards bestowed, and books and journal articles published, show that programs at private institutions are consistently ranked higher than corresponding programs at public universities.
For instance, among all institutions with a Ph.D. program in biochemistry, the average rank for a biochemistry program at a private institution is 61.37, compared with 87 for the average public university program. (A lower rank indicates higher research output, and the number of institutions compared varies with the number of Ph.D. programs offered in a particular discipline.) In accounting, the average private program is ranked 8.17 and the average public program 21.63; in electrical engineering, it’s 61.78 for privates and 79.85 for publics; in philosophy, it’s 43.63 for departments at private universities and 54.71 for programs at public ones. In total, Martin said, 354 American Ph.D.-granting institutions were identified.
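The sector comparison above boils down to averaging each program's rank within its discipline, grouped by sector. A minimal sketch of that arithmetic, using invented ranks rather than Academic Analytics's actual data or method:

```python
# Hypothetical (sector, rank) pairs for programs in one discipline.
# Lower rank = higher research output, as in the Academic Analytics tables.
programs = [
    ("private", 3), ("private", 12), ("private", 18),
    ("public", 7), ("public", 25), ("public", 40),
]

def average_rank(sector):
    """Mean rank of all programs in the given sector."""
    ranks = [rank for s, rank in programs if s == sector]
    return sum(ranks) / len(ranks)

print(average_rank("private"))  # 11.0
print(average_rank("public"))   # 24.0
```

On these invented numbers the private average (11.0) beats the public average (24.0), mirroring the pattern the company reports across disciplines.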
“It’s a cause for concern,” said Martin, who attributed the disparities to growing gaps in resources (large-scale investments in new research, conference support and the funds to hire staffers to help write proposals and "enable the scientists to get on with the science," to name a few examples).
“There are only two interpretations: One is that the faculty at public universities are less good than the faculty at private institutions, on average. Most people wouldn’t accept that as a premise," Martin said. "The second is that the conditions under which we’re working are not as good as those offered by the private universities, so we’re not maximizing our potential impact in terms of science and technology. From a national perspective, in terms of competing internationally, you want everyone running at full speed."
But others aren’t so sure the results unequivocally suggest that one of those two conclusions must be true. A couple of people contacted went so far as to say that the research, based on what they consider to be Academic Analytics’s flawed methodology, constitutes little more than a publicity stunt by a profit-seeking company.
Others pointed to differentiated missions of public and private institutions and diverse ways of assessing success, while still others said the results may signal an early warning sign of widening disparities between public and private institutions that may become undeniably apparent when the National Research Council rankings are released at the end of the year. (Jim Voytuk, senior program officer for the National Research Council, said that at this point it has not collected the data to either refute or support Academic Analytics’s conclusions. It will be interesting to see, he said, whether they will match.)
“If you asked me what do I think is going to happen to the rankings of public institutions in the upcoming National Research Council survey, I would say, 'I think they’re going to go down,' ” said Ronald G. Ehrenberg, director of the Cornell Higher Education Research Institute.
“There have been declining salaries in public education relative to private education, and because of the resource constraints in public higher education, increasingly, the costs of research are being borne by universities themselves out of internal funds. And most public universities don’t have the flow of endowment income and annual giving that is needed to help support research,” Ehrenberg said.
He pointed out that the most recent NRC ratings, from 1995 (in which public universities fared better, comparatively, than they did in the new Academic Analytics rankings), were in large part based on reputation in addition to the types of measurable outcomes Academic Analytics relies upon. With the NRC's shift toward quantitative data, the picture could prove to be somewhat different this time around. But even in the 1995 NRC numbers, clear gaps in publication history could be observed when examined in isolation from other factors, said Lawrence G. Abele, the provost at Florida State University, which is one of 44 institutions that contracts with Academic Analytics.
“I was really struck by how consistent it was across the 47 disciplines that the National Research Council did at that time; publications for faculty at private universities were just significantly ahead,” said Abele, who analyzed the council's data. “It largely is a resource issue.”
John Cheslock, assistant professor in the Center for the Study of Higher Education at the University of Arizona, added that he would predict that any resource gap would only continue to grow. He pointed out, however, that a comparison of average rankings between public and private institutions could be misleading, as many of the public Ph.D.-granting institutions that would be considered are non-flagships that emphasize student access. "There's a lot more depth among publics," he said.
But Elizabeth D. Capaldi, executive vice president and provost at Arizona State University, said that even if Academic Analytics’s findings ring true, they are derived from a poor data set obtained by poor methodology. “You can make good points regardless of your data; everybody knows that the privates have more money,” she said.
Capaldi cited a few problems with the company’s methodology: that it merely trolls publicly available documentation, such as catalogues and university Web sites, to obtain information on the number of faculty in a department; that it doesn’t make its proprietary database available for cross-checks and validation; and that the denominator for many of the company’s per-capita productivity calculations is the total number of faculty in a department, regardless of teaching load or appointment. (For example, a professor hired solely to do research isn’t treated any differently from a lecturer; Carnegie Mellon's Lee, for one, speculated that a high proportion of pure research professors in the university's robotics institute probably helped with the university's stellar performance in the Academic Analytics rankings.)
In 2006 rankings provided free by the Center for Measuring University Performance, for which Capaldi serves as a co-editor, six public universities were in the top 10 in terms of total research monies received. (The data, Capaldi said, come from 2004 National Science Foundation numbers.) Of those in the top 10, four publics -- the universities of California at Los Angeles, Michigan, Wisconsin and California at San Francisco -- were in the top five. "Weird things happen when you divide by number of faculty," Capaldi said of the denominator favored by Academic Analytics.
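Capaldi's point about denominators is easy to see with a toy example: ranking by total research dollars and ranking by dollars per faculty member can put different institutions on top. The figures and institution names below are invented for illustration, not drawn from either company's data:

```python
# Hypothetical institutions: total grant dollars and faculty head count.
institutions = {
    "Big Public U":    {"grant_dollars": 500_000_000, "faculty": 2000},
    "Small Private U": {"grant_dollars": 200_000_000, "faculty": 500},
}

# Leader by total research money (the Center for Measuring University
# Performance's approach, per Capaldi).
by_total = max(institutions, key=lambda u: institutions[u]["grant_dollars"])

# Leader by per-capita output (dividing by faculty count, as Academic
# Analytics does).
by_per_capita = max(
    institutions,
    key=lambda u: institutions[u]["grant_dollars"] / institutions[u]["faculty"],
)

print(by_total)       # Big Public U: $500M total beats $200M
print(by_per_capita)  # Small Private U: $400K per head beats $250K
```

The same two institutions swap places depending solely on whether the measure is divided by head count, which is why the choice of denominator is central to the dispute.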
“This is a way to try to get publicity to get money,” she added, in reference to the release of the Academic Analytics data. “It’s like U.S. News to me -- it’s not an intellectual endeavor where you actually try to analyze and figure out what faculty are doing so you can compare them in a meaningful way.”
In response, Martin of Academic Analytics said that due to drastically different teaching loads and expectations, it would be difficult to distinguish one type of professor from another in the data set. "We don't have any way of knowing what the expectations are; nor are we really casting judgment on what they're doing," said Martin, who added that high research productivity isn't necessarily good in itself and could in fact be equated with, say, minimal attention to undergraduates in some circumstances. Martin said the database would be made available to any researcher with a legitimate academic purpose, whether that is validating the company's rankings or using the data for an entirely different study. He also explained that all universities are asked to verify the faculty lists (though Capaldi wonders why institutions should devote resources to cross-checking the company’s data only to support its profit-seeking model).
For his part, Abele, who independently verified the Florida State data, said that the university has had mixed results in its work with the company: The first time Florida State officials received data from Academic Analytics, he said, they were able to confirm the statistics “dead-on” -- the second time, not so much. He thinks that in the second round, the company relied too heavily on taking data from the Florida State Web site and had enlarged its scope to cover too many disciplines too quickly. “I think it’s a good service,” said Abele, who described it as young and a bit buggy.
Meanwhile, at the University of Kentucky, which turned to the company so it could better gauge its progress as administrators seek to push the university into top 20 status, staff are still in the process of validating the Academic Analytics numbers, Jeannine Blackwell, dean of the graduate school, said Monday. Blackwell said that while she expects the data to be helpful in offering specific institutional comparisons across sectors -- how does Kentucky's medical school compare to that of, say, Vanderbilt University, a competitor she’s eyeing? -- the data do not reflect differences in institutional missions (or teaching excellence), and so might not be the most appropriate tool for blanket comparisons of public and private colleges.
“I think it’s a challenge for public universities because our mission is so different,” Blackwell said. "We have a heavy service mission. When you think about some of the faculty members who by the very definition of the university need to be out in the field and helping with translational and applied knowledge in the field, that takes a tremendous amount of time away from pure lab science that’s going to lead to publications as they’re measured by most research measures," Blackwell said.
“I think that just mining the Internet for data is not likely to lead to a comprehensive set of data,” said Andrew Szeri, associate dean in the graduate division at Berkeley. Szeri was charged with compiling Berkeley data to submit for the 2007 National Research Council's ratings.
“I’m very keenly looking forward to whatever it is the National Research Council will have to say to us. It’s not a perfect vehicle for measuring doctoral programs, one against the other, but I think they have made a conscious effort to collect data directly from the source ... that’s the one I’m going to take the time to read carefully.”
Academic Analytics's Average Ratings for Public and Private Institutions, by Discipline
| Discipline | Avg. Rating (Private) | Avg. Rating (Public) |
| --- | --- | --- |
| Ecology & Evolutionary Biology | 21.07 | 33.05 |
| Civil & Environmental Engineering | 41.83 | 65.33 |
| Communication Sciences & Disorders | 15.67 | 28.45 |