
The world may or may not need another college rankings system; on that question, commentators and pundits are divided.

The creators of a new entry acknowledge the limitations of the genre, but argue that their version -- imperfect as it may be -- improves on the competition by analyzing thousands of colleges of all types (instead of hundreds of mostly selective ones) and assessing them based on how much the institutions themselves contribute to the economic success of their graduates.

In a report (with associated data set) published today, called "Beyond College Rankings: A Value-Added Approach to Assessing Two- and Four-Year Schools," two Brookings Institution researchers offer a complicated tool designed to help consumers and policy makers gauge how thousands of two- and four-year institutions prepare students for the workforce.

The authors' approach is distinctive in numerous ways, several of which are also likely to make it controversial in some quarters.

First, it covers a much more expansive set of two- and four-year colleges (several thousand) than do rankings by U.S. News & World Report, Forbes and Washington Monthly, which typically focus on selective colleges because students' admissions credentials are so central to their criteria. Second, by adjusting for the traits of the students the colleges enroll (so that highly selective colleges aren't rewarded for admitting only well-prepared students who would fare well under any circumstances), the system purports to measure how much the institutions themselves contribute to their graduates' economic success.

"We thought it would be much better to have a value-added system than one that rewards elite colleges for attracting the most-prepared students," said Jonathan Rothwell, the lead author and a fellow in Brookings's Metropolitan Policy Program.

A Focus on Economics

A news release about the Brookings report boasts that it provides insights into "how well colleges prepare students for high-paying careers."

The think tank's approach probably lost some of you right there, by defining student success purely in economic terms.

But Rothwell, who cowrote the paper with Siddharth Kulkarni, a senior research assistant at Brookings, cited both practical and philosophical reasons for doing so. First, "it's much easier to measure economic outcomes than other outcomes" -- say, student learning or graduates' contributions to society -- "with precision," he said.

Second, Brookings's Metropolitan Policy Program focuses on urban and regional development, so the framework was originally developed to understand how successfully postsecondary institutions prepare students for good jobs in their cities and regions.

And third, "even the most committed defenders of the liberal arts or humanities would acknowledge that jobs and economic outcomes matter," Rothwell said, "so if we can come up with a better way" of calculating which institutions are preparing graduates to succeed in the workplace, "that's worth doing."

The primary way the Brookings approach differs from U.S. News and other existing rankings is its attempt to control for institution type as well as for the students a college or university enrolls, making it possible to compare the performance of an individual institution's graduates with that of graduates from institutions with similar characteristics and students.

Institutions are measured on several outcomes for their alumni:

  • Midcareer salary (drawn from PayScale)
  • Federal student loan default rates
  • Occupational earnings power (the average salary of the occupations a college's alumni hold, drawn from LinkedIn and federal data; see the sketch just after this list)
  • The college's added value on each of those three measures
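
To make the third measure concrete: occupational earnings power is simply a weighted average of occupation-level salaries, weighted by the share of alumni working in each occupation. Here is a minimal sketch in Python with entirely hypothetical occupations, shares and salary figures (the report draws the real inputs from LinkedIn profiles and federal wage data):

    # Occupational earnings power: a weighted average of occupation
    # salaries, weighted by the share of alumni in each occupation.
    # All figures below are hypothetical, for illustration only.

    alumni_occupation_shares = {
        "software developer": 0.20,
        "registered nurse": 0.15,
        "accountant": 0.10,
        "teacher": 0.25,
        "other": 0.30,
    }

    occupation_salaries = {  # hypothetical average salaries, in dollars
        "software developer": 105_000,
        "registered nurse": 75_000,
        "accountant": 70_000,
        "teacher": 55_000,
        "other": 45_000,
    }

    earnings_power = sum(
        share * occupation_salaries[occupation]
        for occupation, share in alumni_occupation_shares.items()
    )
    print(f"Occupational earnings power: ${earnings_power:,.0f}")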

The researchers then control an institution's outcomes for a set of institutional and student characteristics. The institutional traits cover the type and location of the college -- Carnegie classification, distribution of degrees awarded, state, etc. The student traits include demographic characteristics such as age, race, gender and the percentage of students who come from in state or are foreign born; the proportion who receive Pell Grants, federal loans and other aid; and imputed scores on standardized math tests. Controlling for students' wealth and academic preparation is important, Rothwell says, because failing to do so biases rankings by crediting colleges for outcomes attributable more to the types of students they enroll than to anything the institutions themselves contribute.

Controlling for those student and institutional traits allows the researchers to tease out differences between institutions' outcomes and what would have been predicted based on their own characteristics and those of their students.
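
In regression terms, that amounts to predicting each outcome from the control variables and treating the residual -- actual minus predicted -- as the college's value added. The Python sketch below illustrates the idea with ordinary least squares on made-up institution-level data; the columns and figures are hypothetical, not Brookings's actual model or controls.

    import numpy as np

    # One row per college: hypothetical controls such as share of Pell
    # recipients, imputed standardized test score and share in-state.
    controls = np.array([
        [0.45, 0.2, 0.80],
        [0.20, 1.1, 0.55],
        [0.60, -0.3, 0.90],
        [0.15, 1.5, 0.40],
        [0.35, 0.4, 0.70],
        [0.50, 0.0, 0.85],
        [0.25, 0.9, 0.60],
    ])
    midcareer_salary = np.array(
        [52_000, 78_000, 47_000, 95_000, 61_000, 50_000, 72_000]
    )

    # Fit a linear model: what salary would the controls alone predict?
    X = np.column_stack([np.ones(len(controls)), controls])  # intercept + controls
    coefs, *_ = np.linalg.lstsq(X, midcareer_salary, rcond=None)
    predicted = X @ coefs

    # Value added: the gap between actual and predicted outcomes.
    value_added = midcareer_salary - predicted
    for i, gap in enumerate(value_added):
        print(f"College {i}: value added = ${gap:+,.0f}")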

Some of those differences are attributable to what the Brookings researchers call "college quality factors" -- variables that affect alumni performance but, unlike the institutional and student traits, are within a college's control. These are:

  • Curriculum value (a calculation of the labor market value of the institution's mix of majors)
  • Share of graduates prepared for STEM fields
  • Value of alumni skills (the market value of the 25 most common skills on the LinkedIn pages of a college's graduates)
  • Graduation rate (measured at twice the standard completion time, e.g., eight years for a four-year degree or four years for an associate degree)
  • Retention rate
  • Institutional aid per student
  • Average salary of instructional staff

Not surprisingly, perhaps, students who attend colleges with high graduation rates and strong curriculum value (a mix of course offerings and majors heavy on high-paying science and technology fields) outperform graduates of other institutions, as do graduates of colleges where the value of alumni skills is high.

While the Brookings analysis finds that those "quality factors" are significant drivers of the added value a college delivers to its graduates' economic outcomes, some institutions show large differences between their predicted and actual outcomes that cannot be attributed to those "observable" quality factors.

Three of the six four-year institutions that Brookings finds have the biggest gap between the predicted and actual midcareer earnings for graduates are the California Institute of Technology, the Massachusetts Institute of Technology, and Rose-Hulman Institute of Technology, all of which have significant observable quality factors given their technological bent. Caltech's large value-added score is entirely attributable to its STEM orientation, the mix of majors and the like, Brookings finds.

But the other three institutions in the top six are Colgate University, Carleton College and Washington and Lee University, all liberal arts institutions. Brookings characterizes the unobservable reasons why an institution might provide a large value-added boost to its graduates as "x factors," and attributes 59 percent of Colgate's value added to such unobserved factors. "It's not the majors that are driving their student success, and it's not the skills they list on résumés," says Rothwell. "It may be they have access to great teachers, it may be that their alumni networks are strong."
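
As a rough sketch of how such a split might be computed (hypothetical numbers, continuing the toy example above rather than reproducing Brookings's method): regress the value-added scores on the observable quality factors, and call whatever each college's observables fail to explain its "x factor."

    import numpy as np

    # Hypothetical value-added scores (dollars) for seven colleges, plus
    # two observable quality factors per college, e.g. curriculum value
    # and share of graduates prepared for STEM fields.
    value_added = np.array(
        [3_000, -1_500, 2_000, 6_500, -4_000, 1_000, -2_500]
    )
    quality_factors = np.array([
        [0.4, 0.10],
        [0.2, 0.05],
        [0.5, 0.08],
        [0.9, 0.40],
        [0.1, 0.02],
        [0.3, 0.12],
        [0.2, 0.06],
    ])

    # How much of each college's value added do the observables explain?
    X = np.column_stack([np.ones(len(quality_factors)), quality_factors])
    coefs, *_ = np.linalg.lstsq(X, value_added, rcond=None)
    explained = X @ coefs

    # The unexplained remainder is the "x factor."
    x_factor = value_added - explained
    for i, (obs, x) in enumerate(zip(explained, x_factor)):
        print(f"College {i}: observables = ${obs:+,.0f}, x factor = ${x:+,.0f}")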

The comparable listing for two-year colleges shows technical colleges like New Hampshire Technical Institute and Texas State Technical College in Waco benefiting from mostly observable quality factors, while Pearl River and Pueblo Community Colleges show significant x factors.

The authors acknowledge the flaws in Brookings's value-added approach. "The biggest limitation of this approach… is that there are many student characteristics for which this analysis cannot account but that may influence students' eventual economic outcomes," such as student grades, aspects of writing ability and leadership.

Tentative Praise for the Model

Researchers given a chance to review the Brookings paper offered a range of views after an initial look.

Darryl G. Greer, a senior fellow at Stockton University's Center for Higher Education Strategic Information and Governance, said he was concerned by the focus on postcollege earnings as the key outcome measure, and surmised that "the sophisticated calculations may simply be measuring the effect of high-value occupations and local economies, and their relationship to students' choices of academic programs of study, rather than particular institutional strength."

And Sandy Baum, a professor of higher education at George Washington University's Graduate School of Education and Human Development, questioned whether the precision the study claims for differences in individual institutions' outcomes is legitimate given some of the imperfect sources of data, like PayScale surveys and analysis of LinkedIn pages.

Nate Johnson, of Postsecondary Analytics, said that he had lots of quibbles about how the Brookings report uses data and that the analysis is a long way from a product that could actually be used by consumers or policy makers to make decisions (which the researchers concede). But Johnson also applauded the Brookings approach for "trying to measure the right things for what college accountability and student choice should be," not focusing on "input measures like U.S. News." 

"It's a big step in the right direction," he said.

Robert Kelchen, an assistant professor of education leadership, management and policy at Seton Hall University, agreed.

"This report reflects a much-needed effort to examine the outcomes of all portions of American higher education, instead of just the most-selective four-year colleges," Kelchen wrote in an email. "Part of the report, even under a 'value added' framework, tells us what we already know -- the way to make money is to major in engineering instead of the humanities. But the differences across colleges that have similar mixes of majors are still substantial. The data used for two key measures (from PayScale and LinkedIn) are far from perfect and do not necessarily reflect all graduates, while not even attempting to examine dropouts. However, given a lack of data on labor market outcomes at the national level, this report still reflects an improvement over the status quo."

Rothwell concurs. "Basically, the approach here is that we assembled what we could out of imperfect spare parts, which I strongly believe is better than either no information or current ratings systems," he said. "I'm sure someone else will do this at some point and hopefully do it better, but this is a start."
