Each year, thousands of students and families read the U.S. News & World Report ranking of colleges to help them find the best college for their educational future. People use rankings, ratings and reviews to buy nearly everything. Why wouldn’t they use rankings to pick a college?
The rankings currently consider several factors—graduation rates, first-year retention, faculty salaries, class sizes, student debt and student selectivity, among others. U.S. News allows readers to review the various factors that contribute to a university’s overall score and decide for themselves if they are appropriate and important.
One factor in the U.S. News scoring system that has raised the ire of many college and university officials over the years is undergraduate academic reputation. This metric is compiled through a survey of presidents, provosts and enrollment managers across the nation. It is the only subjective portion of the U.S. News metrics and is worth a whopping 20 percent of the overall score.
Unfortunately, the peer-assessment survey rigs the results—unintentionally perhaps—in favor of colleges with larger national brands and leaves little room for up-and-coming institutions that are innovating and better preparing students for life postgraduation. What’s worse, only about a third of voters even return the survey, compounding the issue and making each person’s rating that much more impactful to an institution’s overall ranking.
You could ask almost anyone to name the top five colleges in the country, and there would likely be consensus on the institutions, even if their precise order could be debated. But what would happen if you asked for the top 10, the top 20, the top 100, or a ranking of every college? The more colleges you try to rank, the harder the task becomes.
We all have biases regarding any number of subjects. So, it shouldn’t be a surprise that U.S. News voters often rank colleges either in self-interest or based on a college’s historical success. It’s almost like filling out your bracket for March Madness.
At Florida State University, for example, our performance on student success metrics has soared over the past decade. Our retention rate is now 95 percent, our four-year graduation rate is among the top 10 for public universities, and we have eliminated graduation rate disparities among our diverse student body. And yet, our reputation scores over the past 10 years have remained flat.
It is safe to say that Florida State and many other colleges—public and private—would place differently without the peer-assessment metric. It is too often based on historical success—not current performance.
None of this is to suggest that we do away with rankings or even completely delete the peer assessment from the process. Rankings are a valuable tool that can help students and families make the best choice for the future. And U.S. News does a monumental job in compiling its annual publication.
However, now is an ideal time to evaluate how the rankings are developed. With many colleges and universities making the SAT and ACT exams optional, U.S. News will likely need to look at the overall data and weights given to certain categories.
We have two proposals when it comes to the peer-assessment portion.
First, for those sent the survey: don’t underrank a college unless you know its mission and have access to performance data on how it is fulfilling that mission. U.S. News gives us an out, saying that if we don’t know, we shouldn’t rate the college. That is too easy. If we don’t know, let’s find out.
When the peer-assessment surveys are released, U.S. News can give voters a link to an objective data set on each school to review. This will allow voters to carefully consider what each institution is doing in real time and not vote based on limited personal knowledge about a university.
Second, reduce the weight of the peer assessment and redistribute it across student outcome measures. Under this adjustment, Harvard University and other highly ranked universities would still place among the best in the overall rankings, because their other data points are so strong. But putting so much emphasis on a subjective metric favors legacy institutions, whether they deserve it or not, and dilutes the rigor of the rest of the ranking system.
A more accurate peer assessment will lead to a more precise overall ranking, which in turn will help students and families make better decisions. Don’t we owe that to our students and anyone else who uses these rankings to make such important life decisions?