Whatever you do, don't call them rankings.
The National Research Council formally got under way on Tuesday with its new assessment of doctoral programs, which has had a makeover since the heavily used and widely respected review was last conducted in 1995. Among the major changes in the ratings system and methodology:
- Programs will be measured (and probably grouped in bands, rather than listed from No. 1 on down) based on a set of quantitative data rather than scholars' ratings of their peers.
- The number of disciplines to be reviewed will expand significantly, and the council will gather information on (but not rate) a set of "emerging" fields, including such diverse topics as nanoscience, feminist and sexuality studies, and science and technology studies.
- Universities will have to pay to participate.
The NRC's assessment of research doctorate programs is the gold standard for anyone -- faculty members and potential graduate students choosing where to work and study, deans and department chairs figuring out how their programs measure up against their competitors -- seeking a national, standardized way of measuring the quality of graduate programs in dozens of disciplines. It has been conducted twice before: in 1995 and 1983. The next version will be released in 2007, with data collection to begin next spring.
In 2003, the NRC, which is part of the National Academy of Sciences, appointed a panel to review its methodology for the survey. The resulting report concluded that despite the ratings' authoritativeness, numerous weaknesses undercut their effectiveness. Prominent among them -- and this will sound familiar to anyone who follows the continuing debate over higher education rankings -- was the idea that the use of "exact numerical rankings encouraged study users to draw a spurious inference of precision."
That was especially true, according to the panel, because the study rated doctoral programs primarily based on the views of scholars about which were the best programs in their fields, rather than on more objective and quantifiable data.
The NRC's new methodology (which is still a work in progress, to be refined between now and next spring by a panel of administrators and scholars) aims to resolve both problems.
First, the NRC will no longer base its ratings of programs on what Charlotte V. Kuh, who oversees the council's ratings, calls the "soft" measure of scholars' opinions. Instead, the NRC will base the ratings on a slew of statistics on such traditional subjects as research funding and faculty publications, and on a new set of data it is collecting about how students are treated and how they perform, including attrition rates and the time it takes students to complete their degrees. (The questionnaire for institutions also seeks statistics that take into account how well programs prepare graduates to teach, among other measures.) One major task awaiting the NRC panel, Kuh said, is to decide how to meld the various statistics into a cohesive rating.
The panel, which is headed by Jeremiah P. Ostriker, Charles Young Professor of Astronomy and provost emeritus at Princeton University, also has yet to decide exactly how to report its ratings. But in all likelihood, Kuh said, the next version of the NRC ratings will group institutions into wide bands, rather than listing them in numerical order. "That will give a much more realistic picture of how even the top programs are," and deemphasize slight differences in program quality that might separate two programs in traditional rankings but are "probably meaningless," said Kuh, who is deputy executive director of the research council's division on policy and global affairs.
The list of broad disciplines to be rated by the research council will increase drastically this time around, to 57 from 41. Among the fields that will be rated for the first time, because the NRC perceives that they have developed since 1995 into separate disciplines, are biological and agricultural engineering, theater and performance studies, communication, and immunology and infectious diseases.
For the first time, institutions whose programs wish to be rated will have to pay part of the costs of the NRC study. Kuh said that was prompted in part by a question from an official at one of the federal agencies that traditionally pick up about three-quarters of the tab for the ratings, who asked her, "Why don't the people who benefit from this contribute to it?" Institutions will pay between $5,000 and $20,000 based on the number of Ph.D.'s they award annually.
Kuh and other officials affiliated with the revamped NRC ratings hope, first and foremost, that the new system produces a more useful and accurate assessment of the quality of doctoral programs. But they also admit to a slightly broader (though perhaps unattainable) goal: doing their part to ease the rankings madness that has taken hold in higher education, and society at large for that matter.
"We do hope it can inform the larger discussion," said Kuh. "We hope that we can move away from the horse race, although everyone loves the horse race. But graduate education is not a horse race. It's about preparing lots of students for lots of different occupations."