Almost Ready for the Doctoral Program Rankings
Even the harshest critics of U.S. News & World Report would have to give the magazine credit for one thing: There's no doubt when the rankings of colleges will come out. They appear like clockwork every fall.
Many educators who scoff at U.S. News cite the evaluation of doctoral programs by the National Research Council as a different category of ranking -- one that is more methodologically sound and rigorous. But when it comes to timeliness, the NRC isn't winning any contests. Its last rankings were released in 1995, and the one prior in 1982. While the research council never had the goal of issuing annual reports or anything close, delays and debates have become common -- especially with word that the next version is due out in February.
Given the importance of doctoral programs in producing research and the next generation of professors, not to mention their high expense at a time of tight budgets, graduate deans very much want to know what the rankings will feature.
On Friday, the woman overseeing the rankings project appeared at the annual meeting of the Council of Graduate Schools. Charlotte Kuh used her presentation -- with mixed success -- to argue that universities should not focus on any single overall ranking for their departments, but should take advantage of the new way the information will be presented: several subcategories that may allow some departments to shine selectively, even without the overall top rankings many seek. On that point Kuh appeared to succeed, as the graduate deans present seemed to take the subcategories seriously.
But there was no general calming of these officials, who remain anxious and in some cases critical of the coming report. Many were focused on planning public relations strategies for when the material is released. Others objected to parts of the methodology, and Kuh indicated that some methodology questions remain open, while in other cases there is no perfect solution. (While several deans said later that they were not satisfied with the answers, the general tone of the meeting was collegial, with deans praising Kuh for her willingness to appear before them and to keep them up to date on the project's progress.)
Kuh provided new details on how the NRC is constructing three "supplemental measures" that will be both part of the main rankings and available individually. Although she called them "supplemental," Kuh said that they are actually "essential measures" for doctoral programs. They are scholarly productivity; student support and outcomes; and diversity.
In each of these cases, data will support the rankings, but faculty surveys have been used to weight the relative importance of different factors that make up the analyses. While the scholarly productivity measure is closest to the values that shape the overall ranking, Kuh stressed that all of these measures matter. "The quality of doctoral programs is not just about the scholarly productivity and scholarly recognition of program faculty," she said.
For each subcategory, there are further subcategories:
- For scholarly productivity: Average publications per faculty member, average citations per publication, grants per faculty member, awards per faculty member.
- For student support and outcomes: Percentage of graduate students with full support, average percentage of an entering cohort completing the program within six years, average time to degree, job placement of students, and availability of outcomes data.
- For diversity: Percentage of professors from underrepresented minority groups, percentage of faculty members who are women, percentage of students who are from underrepresented minority groups, percentage of students who are female and percentage of students who are international.
There will be some definitional shifts by discipline. For example, the six-year completion window becomes eight years for the humanities. Within each subcategory, faculty surveys are being used to weight the various factors. Under scholarly productivity, for example, faculty members in the sciences weight grants much more heavily than humanities professors do.
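The survey-weighted approach described above amounts to combining each program's factor measures into a single subcategory score using discipline-specific weights. The sketch below illustrates the arithmetic only; the factor names, weights, and program values are hypothetical placeholders, not the NRC's actual data or formulas.

```python
def weighted_score(measures, weights):
    """Combine normalized factor measures (0-1) into one subcategory score,
    using survey-derived weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(measures[name] * w for name, w in weights.items())

# Illustrative scholarly-productivity weights for two broad fields,
# reflecting the article's point that science faculty weight grants
# more heavily than humanities faculty do. All numbers are invented.
science_weights = {"publications": 0.30, "citations": 0.25,
                   "grants": 0.35, "awards": 0.10}
humanities_weights = {"publications": 0.45, "citations": 0.25,
                      "grants": 0.10, "awards": 0.20}

# A hypothetical grant-heavy program: strong on grants, middling elsewhere.
program = {"publications": 0.6, "citations": 0.5,
           "grants": 0.9, "awards": 0.4}

print(weighted_score(program, science_weights))
print(weighted_score(program, humanities_weights))
```

The same program scores higher under the science weights than under the humanities weights, which is why discipline-specific weighting can reorder programs even when the underlying data are identical.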
The questions Friday didn't challenge the importance of any of the categories, but raised concerns about how they are being measured. One dean said that her agricultural science professors were bothered by the idea that grants are being counted by their number, without regard to their quality, importance or size. So a faculty member who receives $1,000 from a local agricultural producer to study some local problem is counted the same way as a faculty member who pulls down a large, peer-reviewed grant from a prestigious national agency. The dean said that there was "a lot of angst" in some disciplines over such apparent flaws in the methodology.
Another dean raised a question about how success is measured in the diversity categories, and was told that the greater the diversity, the greater the score. In many of the diversity categories, that may make sense, and many departments have relatively low percentages, for example, of minority faculty members. But he said that the international students ranking was potentially deceptive under this system. The dean said that any graduate program that doesn't attract any foreign students probably deserves to go down in the rankings. But he said that a program where 95 percent of the students are international isn't necessarily better than one with 40 percent -- and in fact is quite likely a worse program.
"Some of the less strong programs are overly reliant on international students," he said.
Kuh responded to that criticism by saying that "we're going to have to think about that before the report comes out."
But she also said that on many issues, the panel of educators working on the rankings spent time considering alternatives, and had to pick one -- not because that choice was perfect but because some choice was needed to get the project done.
Judging from the responses of deans, most accept that approach in theory, but not necessarily when the choice could make one of their programs look bad. Perhaps anticipating that they will be disappointed, many deans pushed the NRC for as much advance time as possible with the rankings before they are formally released, and asked the NRC and the graduate schools group to be assertive in providing context for reporters about what the rankings mean.
NRC officials said that they would do their best. But those involved in the rankings appeared to be trying hard to get people to understand that they won't like everything in them.
Richard Wheeler, vice provost and dean of the Graduate School at the University of Illinois at Urbana-Champaign, and a member of the panel that worked on the rankings, drew on his experience as a literary scholar for some expectations management. He noted that Henry James called many classic Russian novels "loose baggy monsters." Explained Wheeler: "That's not a bad way to think about the NRC report. But remember, it's worthwhile to read Dostoyevsky and Tolstoy."
Extending his literary comparison, Wheeler also quoted the poet Randall Jarrell, who called novels "prose narrative of a certain length that has something wrong with it." Combine the two quotations, Wheeler said, and you have the coming NRC rankings: "a loose baggy monster of considerable bulk with something wrong with it."
But he stressed that people need to be open minded. "I think it will have good things in it and be useful," he said. "But good is not to say that it will be beautiful or to say that it will be perfect." To those who question some methodological choices, he said they may well be correct. "There’s just nothing in this assessment that couldn't have been done differently."