NEW ORLEANS -- U.S. News & World Report is not going to make significant changes to the way it composes its annual rankings of colleges and universities anytime soon, regardless of how admissions counselors and college admissions officers feel about them.
That's what the head of the rankings said here Friday at a session of the annual meeting of the National Association for College Admission Counseling, after a special committee from the association presented its final report examining the rankings. The report, which took about two years to compile, is based largely on a survey of NACAC members the committee conducted in 2010, and it repeats much of what was established in a draft released earlier this year. Officials from U.S. News worked closely with the committee throughout the process, a relationship that has bothered some rank-and-file members of the association. (The concern was that involving the rankings officials in the discussion might co-opt the process, although the final report suggests that the association did not hold back on its criticisms of the rankings.)
The committee’s report notes that while U.S. News’ rankings are undoubtedly influential, college admissions officials and high school counselors question their value to students applying to college. The report takes issue with the relevance of several components of the rankings -- particularly the reputation survey (in which presidents evaluate similar institutions) and the emphasis on incoming students’ class rank and standardized test scores -- and the “distorting effects” they can have on both students and higher education institutions. It calls for U.S. News to remove test scores and class ranks as components of the rankings, and to reduce the weight of the peer-assessment survey, which currently counts for 22.5 percent of an institution’s score.
The main thrust of the committee's argument against the use of standardized test scores is that they are not a complete measure of student quality, and that the continued emphasis on using them as a key metric of comparison discounts other aspects of admissions. A separate NACAC commission focused on testing concluded in another report "that the continued use of admission test scores as a ranking-related measure creates pressure on admission offices to pursue increasingly high test scores." The rankings report also argues that measuring the quality of incoming students discounts what actually happens once students get on campus.
“The committee believes that the research and discussion about the imprecision of ordinal rankings in the ‘Best Colleges’ list has reached a point where not proactively acknowledging the limitations of the ranking formula through vigorous consumer education and flexible ‘ranking’ options risks misleading consumers and has compromised the journalistic integrity of the publication,” the report states.
The committee’s report takes issue with the way the publication styles the rankings as the “Best Colleges” because the weights assigned to different metrics are essentially arbitrary. “There is no objective way to select criteria for inclusion in a ranking method,” the report states. “Once the criteria are selected, there is further no objective manner for assigning weights to each criterion. As a result, there are an infinite array of possible rankings for colleges and universities.” One of the committee’s main concerns is that, given the diversity of higher education institutions, no single list of the “best” is possible.
Robert Morse, who oversees the rankings for U.S. News, said at the presentation that he and others at U.S. News appreciated the dialogue with the committee and NACAC and the responses presented in the survey. He said officials from U.S. News would work with the association to educate members on the rankings -- another of the report’s recommendations -- but it would not make substantial deviations from its formula because of the report.
As long as colleges and universities continue to treat test scores and class rank as crucial components of their admissions criteria, Morse said, it is hypocritical for institutions to ask U.S. News not to do the same. For the 2011 rankings, test scores accounted for 7.5 percent of the overall ranking for all institution types, which Morse asserted is probably less weight than most colleges give the scores in admissions. Morse said U.S. News would consider lessening the importance of students’ test scores and class rankings if institutions discounted such measures. “Schools are using it to build their class,” he said at Friday’s session. “We believe that makes it a credible metric.”
In a recent Inside Higher Ed survey of college and university admissions directors, 71.7 percent of respondents at four-year institutions said they would continue to require standardized tests for undergraduate applicants. The figure was particularly high among public doctoral and master’s institutions, at 93.3 percent and 82.0 percent, respectively.
The committee’s report recommends that U.S. News replace entering class metrics such as test scores with output metrics such as student satisfaction surveys and educational attainment tests. Morse called the incorporation of measures of student experience and educational outcomes an “aspirational goal” but one that was infeasible at the moment. Measures of educational outcomes, such as the Collegiate Learning Assessment, are not widely used and the information is rarely made public, he said.
The other area of methodological concern addressed in the report was the reputation surveys that U.S. News sends to college and university presidents. The survey found that most admissions officials felt the peer surveys could not be an accurate reflection of institutions’ quality. “The peer assessments are highly subjective and may be disproportionately influenced by social factors that do not measure institutional quality,” the report states. Questions about the validity of the peer survey came to the forefront of the discussion at a meeting of institutional researchers in 2009 when a former institutional researcher from Clemson University spelled out how her institution had rated peer institutions poorly while rating itself highly.
There was some concern among admissions officials in the audience at Friday’s presentation that the rankings have become a self-fulfilling prophecy. Members of the audience said college and university presidents rarely have direct knowledge of the quality of the undergraduate experience at other institutions, and therefore must rely on third-party information, such as the U.S. News rankings themselves, to evaluate programs.
Morse said he has not seen social science evidence that proves that such concerns are valid. If such evidence turned up, he said, U.S. News might reevaluate how it approaches and weighs the surveys.
Responses to several of the NACAC survey’s questions suggest that admissions directors perceive a widespread influence of the rankings on their colleagues’ decisions, even while denying that they personally make policy based on the rankings. A majority of respondents said they believed the rankings pressure institutions to make programmatic changes, such as accepting students with higher test scores, in order to improve their position. But a majority of the institutions that responded said they do not change how they operate to improve their rankings.
“While there is little conclusive evidence of widespread gaming of the rankings,” the report states, “the tendency to conform to rankings methodology creates incentives to focus disproportionate resources on data elements that can change rankings without necessarily changing the quality of the institution.”
At previous admissions meetings, sessions about the rankings typically filled rooms and sparked heated debate. Counselors and college admissions officials noted that, unusually, Friday’s session was only about one-third full.
Some members of the audience said the sparse attendance indicated admissions officials' fatigue regarding the rankings debate. “We’re going to move beyond rankings in this country,” said Lloyd Thacker, executive director of the Education Conservancy and a longtime critic of the U.S. News rankings.
Because the committee found the way U.S. News weights different components of the rankings to be arbitrary and misleading, the report asked both NACAC and U.S. News to work to develop tools that will allow for customized rankings based on individual students' and families’ criteria. According to the survey, admissions officials find the data included in the rankings helpful. “Let’s just use the data and not the rankings,” said Jonathan Reider of San Francisco University High School, at Friday’s session. “We’ve never had a good reason for why we need an ordinal ranking.”
While he was rather dismissive of the report’s recommendations, Morse said the dialogue between admissions officials and the rankings team was important. “We want to have this dialogue,” he said. “It’s important to hear people’s views.”