Many in the academic community despise college rankings and the implicit "grading" of universities that accompanies them. This is deeply ironic, since universities themselves depend on metrics such as SAT scores, high school grades, GRE tests and the like to assess the competency of students for admission. Likewise, they use student grades, faculty teaching evaluations and endowment growth figures as metrics to compare students, faculty and institutions with one another.
The emphasis on rankings has three root causes. First, parents love their children and want the very best for them given their financial constraints. Hence parents and students eagerly devour college rankings. Second, Americans are by nature competitive "can do" people who admire and reward merit and excellence. Where else in the world do 100,000 people pay $60 a ticket to sit in uncomfortable seats to watch college kids compete by throwing a ball around? (College athletic departments have no problem with performance metrics or rankings!)
Third, the failure of colleges themselves to provide virtually any information on the value that they add to their students' knowledge, critical thinking skills, moral character, leadership qualities or any other positive attribute forces the public to look to outsiders for evaluations. Accreditation agencies could do this, but because they are controlled by the colleges themselves, accreditation reveals little about institutional quality and provides little meaningful information to the public.
Therefore, rankings are useful, trying to distinguish the great from the mediocre, the good values from the rip-offs. U.S. News & World Report's rankings are thus popular, and the public pays good money to get them. U.S. News meets a strongly felt need. Next to the purchase of a home, the decision about college is the largest investment decision most families make, and they need help in assessing what they are buying, just as Consumer Reports and J.D. Power and Associates help us overcome the information costs associated with buying a car or television.
At the same time, given the lack of any standardized measures of "value added," ranking colleges involves using methodologies whose appropriateness can be criticized. And different approaches yield meaningfully different results. Let me compare the two most recent rankings, by Forbes and U.S. News & World Report. Full disclosure: I was the lead investigator in compiling the Forbes rankings. (For a critical look at the Forbes ranking, see related essay today.)
Looking at just the 133 schools that U.S. News ranks on its national research universities "tier one" list, or the similar list of 124 top-ranked liberal arts colleges, I compared its rankings to those by Forbes. The correlation coefficient between the rankings in both cases was about +.67, suggesting a lot of commonality between the rankings -- but important differences, too.
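The comparison described above can be sketched with a small rank-correlation computation. The data below are invented for illustration (these are not the actual U.S. News or Forbes ranks); the function itself is just Pearson correlation applied to rank positions, which for two rankings of the same schools is Spearman's rho.

```python
def rank_correlation(ranks_a, ranks_b):
    """Pearson correlation between two lists of rank positions
    (equivalent to Spearman's rho when both lists rank the same schools)."""
    n = len(ranks_a)
    mean_a = sum(ranks_a) / n
    mean_b = sum(ranks_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(ranks_a, ranks_b))
    var_a = sum((a - mean_a) ** 2 for a in ranks_a)
    var_b = sum((b - mean_b) ** 2 for b in ranks_b)
    return cov / (var_a * var_b) ** 0.5

# Hypothetical ranks of the same ten schools under two ranking systems:
system_one = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
system_two = [2, 1, 4, 3, 7, 5, 6, 10, 8, 9]
print(round(rank_correlation(system_one, system_two), 2))
```

A coefficient near +1 means the two lists mostly agree on ordering; a value around +.67, as reported above, indicates substantial but imperfect agreement.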
For example, among the national research universities, six of the top 15 schools in the U.S. News rankings did not make the Forbes top 15 -- University of Pennsylvania, Duke University, Dartmouth College, Washington University in St. Louis, Cornell, and Johns Hopkins. Forbes’s top 15, however, includes Brown, Rice, Brandeis, Boston College, Tufts and the University of Virginia. Northwestern and Washington University in St. Louis are tied for 12th in U.S. News, but Forbes ranks Northwestern much higher (6th vs. 33rd) than Wash U among national research universities.
U.S. News ranks the University of Southern California 27th among national research universities and calls it "up and coming." Forbes, by contrast, ranks USC 66th on the comparable list of national research universities, and a so-so 300th among all schools, including liberal arts colleges. Indeed, USC ranks well behind at least six schools in Los Angeles County alone -- the five Claremont Colleges (Pomona, Claremont McKenna, Harvey Mudd, Scripps and Pitzer) and UCLA. Why? USC students don't particularly like their instructors (as indicated on ratemyprofessors.com), often graduate with fairly high debt, or worse, don't graduate at all. USC seems better at raising and spending money than at satisfying undergraduate students.
Among the top 16 liberal arts colleges, U.S. News lists Carleton, Davidson, Claremont McKenna, Vassar, Grinnell and Harvey Mudd colleges, but Forbes does not. However, Forbes has Smith, Hamilton, Barnard, Centre, Wabash, and Whitman Colleges. The contrast with U.S. News with respect to Wabash (6th vs. 54th) and Centre (7th vs. 45th) is particularly startling. The moral of the story for prospective students: look at more than one ranking.
Even more important are two major differences in approaches. First, in compiling the Forbes rankings, both the editors and I felt strongly that all colleges belong together in a single list. When choosing a college, high school seniors often compile a short list with both liberal arts colleges and large research universities. College is college, and a good ranking system compares the undergraduate experience at all types of institutions offering the bachelor’s degree. In doing this, Forbes found on average higher rankings for the smaller schools; only one of Forbes's top 50 schools (the University of Virginia) had more than 10,000 undergraduate students. I would hypothesize that where undergraduate education is the sole or dominant emphasis, students get more attention and thus have a better overall experience.
The second difference relates to methodology. In U.S. News's rankings, reputation and resources are critical to a high ranking. Indeed, ranks are enhanced by spending more per student, paying faculty higher salaries, or getting more alumni to donate. Generally speaking, the rankings are reputational and input-based. The Forbes rankings are much more outcomes-oriented (e.g., emphasizing student satisfaction with instruction and postgraduate vocational success of alums), also ranking schools higher when students leave with a lower debt burden.
There are two features of the Forbes rankings that make them somewhat superior in my judgment. First, they require no cooperation from the schools, using only data from sources beyond the schools' control. Thus Forbes ranks Sarah Lawrence (which refuses to provide data) but U.S. News does not. It is harder to "game" the Forbes rankings. Second, and more important, a statistical analysis of all 569 schools in the Forbes rankings shows no statistically significant relationship between spending per student and rankings. They are spending-neutral, and buying your way to the top of the rankings is not an option.
By contrast, with both the tier one national research universities and liberal arts colleges -- the heart of the U.S. News rankings -- there is a statistically significant correlation between per student spending and performance, suggesting a school with brute financial force can move up in reputation. For those who believe rankings contribute to the academic arms race: hope that the Forbes rankings gain increased popularity over time.
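The significance test behind the claims above can be sketched as a simple regression of ranking on per-student spending: if the t-statistic on the spending coefficient is large (roughly |t| > 2 for the 5% level), spending predicts rank. This is a minimal sketch with invented data, not the actual analysis of the 569 schools.

```python
import math

def ols_slope_t(x, y):
    """Simple OLS of y on x; returns (slope, t-statistic testing slope = 0)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    # Standard error of the slope from the residual variance:
    se = math.sqrt(sum(r * r for r in residuals) / (n - 2) / sxx)
    return slope, slope / se

# Hypothetical data: spending per student (in $1,000s) and a ranking score.
spending = [20, 35, 50, 28, 60, 45, 33, 70]
score = [40, 55, 62, 48, 75, 58, 50, 80]
slope, t_stat = ols_slope_t(spending, score)
print(f"slope = {slope:.2f}, t = {t_stat:.1f}")
```

A "spending-neutral" ranking, in these terms, is one where this t-statistic is too small to reject a zero slope; a significant positive slope is what the article attributes to the U.S. News lists.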
Richard Vedder directs the Center for College Affordability and Productivity, is a visiting scholar at the American Enterprise Institute, and is Distinguished Professor of Economics at Ohio University.