We all owe Catherine Watt, the former Clemson administrator who gave a well-publicized presentation on her university’s efforts to improve its U.S. News & World Report ranking, our gratitude.

Since Watt’s presentation in early June, much of higher education has been forced to confront the issue of what institutions do to affect the rankings, and specifically how they fill out U.S. News’s peer “quality assessment” survey. That survey makes up 25 percent of U.S. News’s rankings formula, dwarfing any other factor. Watt’s presentation included a PowerPoint slide that said “Rate (almost) all programs other than Clemson’s below average.”

The open secret around the illegitimacy of these U.S. News surveys is a secret no more.

As law professors, we’ve been following the kerfuffle with great interest. After all, law schools have been dealing with the same rankings problem -- and the same “quality assessment” surveys -- as have colleges and universities.

And though neither of us has any love for the rankings, we’re convinced that they’re here to stay. We have hope for various alternative rankings, but we also think that the academy could change the game of “gaming” the existing U.S. News survey by being bold.

Some recent history: When the U.S. News rankings started, the magazine called its annual survey of university administrators a “reputation” survey. Critics pointed out that “reputation” was a silly factor to include. It is highly correlated with name recognition and preexisting prestige, but wholly detached from real measures of the “value added” of the education at a particular institution.

So U.S. News, to its credit, changed the survey in 2002 to a peer assessment of the “academic quality” of the undergraduate program.

As its weight in the formula signifies, this “quality assessment” survey is at the heart of U.S. News’s efforts to preserve some semblance of legitimacy for its rankings. Whenever U.S. News is attacked on the validity of its rankings, it responds quite reasonably with some version of: “We ask the experts.” As U.S. News’s executive editor told Time magazine a few years ago, the survey is “a very legitimate tool for getting at a certain level of knowledge about colleges.… Who better to ask to evaluate colleges than top college administrators?”

In legal scholarship, we call this a delegation of lawmaking power -- something that legislatures do to administrative agencies, or that the common law of tort does to juries. Educational administrators, U.S. News says, you are to assess the “academic quality” of educational programs, using any and all relevant factors. The rest is up to you.

But the Clemson incident and its aftermath have revealed what has been obvious to many: the fallacy of using this kind of mass survey -- particularly with hugely self-interested respondents -- to assess the educational quality of institutions.

Because most university administrators are likely to rate their own schools and their rivals’, U.S. News is able to report relatively high response rates -- key to the legitimacy of the enterprise. But what U.S. News does not reveal (and it should) is how many responses were received for each school listed in the survey. No doubt the top schools are rated by hundreds of people, while the less well-known schools are rated by relatively few. So the ratings don’t reflect the same depth of knowledge about “quality” from the best-known school to the least-known one.

Indeed, all the evidence indicates that these “quality assessment” scores are very sticky and highly correlated with existing prestige. In an analysis of the law school rankings, Jeffrey Stake of Indiana University at Bloomington found that the strongest predictor of changes in quality assessment scores was the change in the previous year’s overall rankings. Though there have been important efforts to rank schools by the quality of their research, this doesn’t get at the question U.S. News asks: the quality of the educational programs. There’s an information vacuum on relative educational quality, and no real incentive for any university administrator, acting alone, to figure it out -- let alone report it honestly.

The lack of competition on quality has serious consequences. It’s a major cause of the enormous effect that the U.S. News rankings have had on admissions and financial aid policies: the move from need-based to “merit”-based aid, and the elevated importance of test scores in admissions decisions. In the absence of competition on quality, institutions rationally focus on things they can change, like raising the incoming credentials of their students by throwing money at the crème de la test-takers.

That’s the bad news.

The good news is that, with a little help from U.S. News, we could fix this problem by taking our job of assessing quality more seriously than we have in the past and, in doing so, creating incentives for positive competition. In many ways, we pay too much attention to rankings. But here, we haven’t paid enough attention.

We’ll have to change how we do peer “quality assessment” of academic programs for U.S. News. But we actually have quite a bit of experience with peer assessment of quality through things like site visits for accreditation, grant review, and peer-reviewed journals.

How would an improved effort at peer quality assessment work? Former NYC Schools Chancellor Harold Levy recently suggested in a New York Times op-ed that the reports from accreditation teams should be made public. That would be a great start.

One can imagine teams from different regions of the country doing the evaluations: a team from small liberal arts colleges in the Southeast evaluating those in the West, for example, to minimize the home-team bias rampant under the status quo. Or retired professors might be enlisted.

Others may have better ideas on how exactly to measure quality. (One of us is currently exploring the idea of the “networking ability” of faculty, students, and alumni as a factor for law schools.) But the point is this: if both sides -- U.S. News and the professional associations in each area of higher education -- were willing to engage in a real dialogue, we could come up with a more rational means of comparative assessment of educational quality. And that ought to mean a system based more on expert evaluators than on the equivalent of a public opinion poll.

U.S. News appears open to such a dialogue. Robert Morse, who oversees U.S. News’s rankings methodology, has engaged thoughtfully with the higher education community and made changes to the methodology in response to feedback. He wants to get it right, and he is indeed part of an international effort to develop best practices for the ranking systems that are becoming common around the world.

Besides who’s doing the evaluating and how, we also have to figure out what exactly we’re evaluating, or rather, the data we ought to use in doing so. For colleges and universities, the good news is that such efforts are underway.

Efforts like the Education Conservancy’s Beyond Ranking project and the Association of American Colleges and Universities’ LEAP program are trying to build new mechanisms for assessing educational quality. The think tank Education Sector has made a fascinating proposal to build such a system with existing data like the National Survey of Student Engagement, the Collegiate Results Survey of alumni, and the Collegiate Learning Assessment, which compares students’ abilities as freshmen and seniors.

One obstacle is that much of this data is currently private. Moving toward greater transparency here is critical both to better educational outcomes and to creating a new ballgame on the rankings.

In legal education, these efforts are further behind, though there are groups working to implement the Carnegie Foundation’s landmark report Educating Lawyers and the Best Practices for Legal Education report; these groups can play a key role. We’ve helped start a new effort, Race to the Top (we had the name before Secretary Duncan!), that’s working along a similar track.

We might also need an improved method of converting an assessment of quality into numerical ratings. As Ted Seto of Loyola Law School and others have pointed out, the 1-5 scale does not allow enough variance for meaningful comparison. We might need a more fine-grained approach, in which evaluators could rate schools to one decimal place (3.3, 3.7, and so on). Or we might give up on fine-grained distinctions entirely, preferring instead to group schools into just a few clusters, informed by real peer assessment -- with each cluster indicating roughly equivalent “quality.”

It may also make sense to further segment the market. Certainly, the law school rankings would benefit from even the kind of segmenting already done for colleges and universities, in which different kinds of institutions are placed in different categories. For law schools, one could imagine national, regional, and metropolitan schools that serve students with different needs and aspirations.

Our message is simple: The quickest way to address the rankings problem is to use the power that U.S. News has already delegated to us and develop incentives for positive competition through this currently flawed survey instrument. If we can accelerate the process by focusing collective efforts on this survey now, then we might be able to use the rankings to improve the quality of the education that schools provide. At the very least, we could wean ourselves from the self-destructive behavior that catering to the rankings has fostered.

University presidents, provosts, and deans: don’t be passive. You have a chance to take control of the one magazine issue a year that kicks your ulcer into high gear. Imagine a world in which quality assessment focused on, well, real quality: the quality of the educational experience at the institution, rather than factors over which the institution has little control.

Assessing real quality would enable you to focus resources on improving that educational quality, and institutions already focused on student engagement and educational outcomes would have the initial advantage. Encourage your representatives in the Association of American Colleges and Universities or the equivalent to engage with U.S. News on these issues, and let the race to the top begin.
