Strategies in an Age of Assessment

Maintaining faculty support and enthusiasm is key when assessing graduate programs, but it's also one of the most difficult parts of the job, two deans tell the Council of Graduate Schools.

December 4, 2008

WASHINGTON -- When college administrators craft plans to assess doctoral programs, they might also consider “going into exile” after the results are released.

Such is the advice of Patrick Osmer, dean of Ohio State University’s graduate school. Speaking at the Council of Graduate Schools annual meeting here Wednesday, Osmer half-jokingly told attendees that they might need to go into hiding after conducting a program review on the scale of the one Ohio State recently completed. There are few things in academe, he noted, that are as sure to provoke debate -- and controversy -- as an honest introspective evaluation of an institution's strengths and weaknesses.

“One of the challenges [of assessment] is to convince people it’s a constructive exercise, not a punitive one,” Osmer said.

That challenge may have been particularly tricky at Ohio State, where a panel of nominated faculty and external reviewers suggested in April that 5 of the university’s 90 Ph.D. programs were “candidates for disinvestment or elimination.” On the other hand, 29 programs were deemed "strong" or “high quality” and will be given new fellowship stipends to enhance student recruitment.

The five programs with “serious problems” that may be eliminated included welding engineering; soil science; specialty tracks in both rehabilitation services and technology education; and a comprehensive vocational education program, which is already inactive.

The highest-rated programs included astronomy; history; linguistics; political science; psychology; pharmacy; veterinary biosciences; chemical and biomolecular engineering; materials science and engineering; and three separate business Ph.D. programs.

Growing Emphasis on Outcomes

To evaluate its doctoral programs, Ohio State drew largely upon the data the university had already submitted to the National Research Council, which is soon expected to release its much-anticipated Assessment of Research-Doctorate Programs.

In its own evaluation, Ohio State's reviewers looked at the incoming qualifications of students, including GRE scores. With a nod to accountability advocates who are calling on colleges to look more closely at student outcomes, however, the university also examined the placement of its doctoral graduates in the workforce, the percentage of students who actually completed their Ph.D.’s, and the time it took students to finish their degrees.

Sally Francis, dean of the graduate school at Oregon State University, noted during her own presentation that the university had historically focused more on the “inputs” of its graduate programs, including the grade point averages of incoming students.

“We didn’t ask programs … ‘What does your product look like when it walks out the door? How successful is this student?' ” Francis said.

Oregon State still uses input data, but the university has increasingly looked to student outcomes like job placement as important measures as well, Francis said.

While Ohio State set its own measurement criteria in its recent assessment of graduate programs, reviewers concentrated on the same data that the National Research Council will consider in its upcoming evaluation. Osmer acknowledged that, in doing so, the university may invite criticism from across campus if the council sees something different in the same sea of numbers.

“When that [report] comes out, and it contradicts what we found, then what will I do?” he said with a laugh. “I’ll have to work on my own exile plan.”
