
All of us have our own markers that tell us fall is here. It might be apple or pumpkin picking, cooler days and nights, or the arrival of the college and professional football seasons. For those of us in the college counseling world, the National Association for College Admission Counseling conference is a line of demarcation between the start of the academic year and the onset of recommendation-writing season, which will consume our waking hours, and perhaps our dreams, for the next couple of months.

The release of the U.S. News rankings has never been on that list for me. I neither look forward to nor pay attention to the rankings in most years, and I’m always annoyed by the annual local newspaper stories highlighting how universities and colleges in my state have moved up or down the rankings. Is that really news?

There were, however, two noteworthy events arising out of this year’s rankings release. The first was a change in the methodology used in compiling the rankings, described by U.S. News as “the most significant methodological change in the rankings’ history.” U.S. News is placing greater emphasis on outcomes related to social mobility—on graduating students from all backgrounds, with manageable debt and set up for postgraduate success—while removing class size, the percentage of faculty with terminal degrees, alumni giving, high school class rank and the proportion of students taking on federal loans as ranking factors.

The second was the reaction to the new rankings on the part of colleges and universities whose positions dropped as a result of the changes in methodology. The most prominent of those was Vanderbilt University, which saw its ranking among national universities fall from 13th to 18th.

On the morning the rankings were released, Vanderbilt chancellor Daniel Diermeier and provost C. Cybele Raver sent an email to students, faculty and alumni addressing the drop in rank. The statement was a little over the top, leading a columnist for Vanderbilt’s student newspaper, The Hustler, to label the email “damage control.”

Vanderbilt’s leaders described the new rankings methodology as “disadvantaging many private research universities while privileging large public institutions.” Their statement took issue with U.S. News’s new emphasis on social mobility: while acknowledging that social mobility is an “important consideration, to be sure,” they argued that it is nevertheless misleading for U.S. News to “commingle this policy concern with measures of educational quality.”

They also argued that the metrics used in the old methodology were better measures of “quality,” and they described the changes as reflecting “incompetence and lack of rigor” on the part of U.S. News.

There’s a lot to unpack here.

Let’s start with the changes in U.S. News’s methodology. For years the rankings have justifiably been the subject of criticism, with one of the major beefs being that U.S. News focuses on input factors rather than output factors. The methodology change is an attempt to respond to that criticism, and U.S. News deserves credit on that front.

The bigger question, though, has always been whether the metrics used in the rankings actually measure what they purport to indicate. For years admissions selectivity was a major metric. But does admissions selectivity tell us anything about academic quality? The belief that “the harder a place is to get into, the better it must be” is a suburban legend.

Or take the alumni giving rate, one of the metrics removed this year. U.S. News used to suggest that it measured alumni satisfaction, but doesn’t it really measure the effectiveness of an institution’s development arm?

So is the focus on outcomes and measures of social mobility a better approach? That’s actually two different questions.

A spokesperson for U.S. News told Inside Higher Ed that outcome measures like student debt and postgraduate income are more important indicators of value. That prompts the question of why it has taken U.S. News until now to put more emphasis on those factors while eliminating others.

It also raises the question of whether U.S. News is fundamentally changing what the rankings are intended to measure. Student debt and postgraduate income may be important indicators of “value,” but is that the same thing as academic quality? That’s at the heart of Vanderbilt’s criticisms.

The broader question is whether social mobility should be one of the overriding goals of colleges and universities. I think the answer is yes. Higher education has a responsibility to society to be an engine of access and opportunity for traditionally underrepresented groups. Research like that done by Raj Chetty and his co-authors demonstrates that what U.S. News calls America’s “best” colleges aren’t distinguishing themselves on the social mobility front.

Let’s turn to Vanderbilt. The statement issued by the chancellor and provost comes across as tone-deaf, but we all know that they were likely bombarded with panicked emails from alumni and parents asking what’s happening in Nashville that has led to a drop from 13th to 18th in the rankings.

The answer, of course, is that the change in ranking is a function of the change in methodology. Vanderbilt is just as good a place as it was a year ago, and one of the flaws in the U.S. News rankings is their false precision. How much difference is there between institutions five places apart? I wish Vanderbilt had stuck with that line of argument rather than attacking the new rankings methodology as “flawed,” marred by “incompetence and lack of rigor.” The same charges could have been leveled at the old methodology.

I find myself sympathetic to Vanderbilt’s argument that the data on indebtedness and postgraduate earnings are incomplete because U.S. News sources them from the Department of Education’s College Scorecard. The scorecard reports those metrics only for students receiving federal aid, which in Vanderbilt’s case leaves out two-thirds of its graduates.

On the other hand, I am particularly bothered by Vanderbilt’s characterization of the new methodology as “privileging large public institutions” with higher percentages of Pell Grant recipients and first-generation students. Any methodology is going to advantage some institutions and disadvantage others, but the use of the word “privileging” is too strong, too emotional and plain wrong.

Vanderbilt is in some ways a victim of its own success. It is among a group of nouveau riche institutions that have become dramatically more selective over the past 30 years. In the 1990s, Vanderbilt admitted 65 percent of its applicants (per, ironically, the 1993 edition of the U.S. News rankings), whereas today that number is under 10 percent. Is Vanderbilt that much better today? Probably not. Has that success led to institutional hubris? Perhaps.

The statement by Diermeier and Raver is also a reflection of the new definition of “American exceptionalism” exemplified by politicians like Donald Trump and Kari Lake, where you take exception to any result that doesn’t go your way.

The reality, of course, is that neither input factors nor output factors come close to measuring what is most important about a college education—the experience a student has in and out of the classroom while in college. That’s extremely difficult to measure, but trying to rank colleges without that component is like trying to rank “America’s Best Churches” without taking into account spirituality.

Is there any way to fix that, to determine just how much value a particular college or university adds?

I’d like to suggest an experiment. Author Malcolm Gladwell has suggested that prestigious colleges are “selection effect” institutions rather than “treatment effect” institutions, with their prestige reflecting whom they are able to select rather than what value they add. I’d love to see a higher education version of the movie Trading Places, with a university like Vanderbilt trading its student body with that of a college that is much less selective and much more socioeconomically diverse. Would the outcomes be any different?

Jim Jump recently semiretired after 33 years as the academic dean and director of college counseling at St. Christopher’s School in Richmond, Va. He previously served as an admissions officer, philosophy instructor and women’s basketball coach at the college level and is a past president of the National Association for College Admission Counseling.
