Counting Students Equally?

The Education Department's ratings framework embraces the concept of adjusting outcomes for student demographics -- an approach that would be unusual for the federal government and that is not without critics.

January 30, 2015

A core premise of the Obama administration’s college ratings plan -- and one that makes it controversial -- is that colleges and universities need to be held more accountable for student outcomes.

College presidents have repeatedly argued that those outcomes, like completion rates and graduates’ earnings, are largely a reflection of the student population they serve, and therefore not necessarily a good benchmark of their institution’s success. A ratings system, they warn, could discourage colleges from recruiting students they're not confident will graduate.

U.S. Department of Education officials working on the ratings have long said they’re going to overcome that problem by comparing colleges' performance only to that of other institutions with similar missions.

But in the 17-page ratings framework released last month, officials also said they’re eyeing an additional strategy to make fair comparisons: adjusting a college’s outcomes based on the demographics of the students it enrolls.

That approach is largely unprecedented in federal higher education policy. The standards to which colleges are now held by the federal government's aid programs do not generally take student demographics into account.

The approach is also controversial: critics say it would set lower expectations for colleges that serve disadvantaged students.

Department officials said they are exploring the possibility of using a statistical model to predict a college’s graduation rate and graduates’ earnings based on the demographics of its student body. They would then compare colleges’ statistically expected outcomes to their actual outcomes.  

Among the student demographic information that the department is considering including as part of that regression analysis: family income, parents’ educational attainment, age, gender, marital status, veteran status and ZIP code. The department's list did not include race or ethnicity; the federal aid application does not collect that information.

Adjusting a college’s graduation rate or its graduates’ earnings data for those data points, department officials wrote, would “provide a more fair assessment of institutional performance to the public than one that relies solely on raw outcome data.”
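The department has not published a model, but the approach it describes -- predicting each college's outcomes from the demographics of its students, then comparing actual to expected results -- can be sketched as an ordinary least squares regression. The institutions, characteristics and numbers below are invented purely for illustration:

```python
import numpy as np

# Hypothetical sketch of the kind of input adjustment described:
# regress an outcome (here, graduation rate) on student-body
# characteristics across institutions, then compare each college's
# actual outcome to its statistically expected one. All data are
# made up for illustration.

# Each row is one college:
# [median family income ($000s), share first-generation, share age 25+]
inputs = np.array([
    [95.0, 0.10, 0.05],
    [40.0, 0.55, 0.40],
    [60.0, 0.35, 0.20],
    [30.0, 0.60, 0.50],
    [75.0, 0.20, 0.10],
    [50.0, 0.45, 0.30],
])
grad_rates = np.array([0.88, 0.45, 0.62, 0.41, 0.70, 0.58])  # actual rates

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(len(inputs)), inputs])
coef, *_ = np.linalg.lstsq(X, grad_rates, rcond=None)

expected = X @ coef               # statistically predicted rate per college
adjusted = grad_rates - expected  # positive residual = outperforming peers
```

Under this kind of model, a college is judged not on its raw graduation rate but on the residual: how far it lands above or below the rate predicted for a student body like its own.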

The department’s proposal for adjusting outcomes embraces, to some extent, what public universities and others have been seeking.

The Association of Public and Land-grant Universities has called on the administration, in lieu of a ratings system, to hold colleges accountable for outcomes like completion rates and graduates’ employment rates -- but only after first taking into account “student readiness.”

Michael Tanner, the APLU’s vice president for academic affairs, said that the group was still working on how a regression analysis should work but that it would allow much fairer comparisons between institutions.

Without making an adjustment, he said, “the effect is that almost every institution can improve just by becoming more selective.”

But others have criticized making “input adjustments” to student outcome metrics.

David Bergeron, a former Education Department official who is now vice president for postsecondary education at the Center for American Progress, largely praised the administration’s ratings outline but said he was concerned about adjusting outcomes.

“If you do a statistical manipulation that says, ‘We know that students who come from 150 percent below poverty [line] are half as likely to complete,’ then we’re really saying that those students don’t matter as much as the more affluent students,” he said. “That, I find, morally problematic.”

“Doesn’t the student who has everything against them -- aren’t they entitled to be counted and treated with the same level of commitment to their outcomes as the student who has no risk factors?” he added. “That’s my fundamental concern.”

Mary Nguyen Barry, an education policy analyst at Education Reform Now, a progressive think tank, said it is appropriate to adjust outcomes for students' academic preparation, such as high school grade-point average, but that she opposes using some of the metrics the department has floated, like gender or income.

“If you adjust for those factors, you’re attributing different expectations to different groups of students,” she said.

Adjusting standards based on student demographics is also an approach the Obama administration has previously rejected in other areas. During debates on gainful employment, the administration, over the objections of for-profit colleges, said it wanted to hold all institutions to certain minimum standards -- even if they enrolled large numbers of low-income students, for instance.

Other standards that the federal government currently has for colleges -- cohort default rates, for instance -- do not generally take into account income levels and other student-level demographics.

Robert Kelchen, a professor of higher education policy at Seton Hall University, has developed an input-adjusted model as part of his work on Washington Monthly’s rankings of colleges.

“Something needs to be done to account for the different students that colleges serve,” he said. “The question is how you do it. Whenever you do input adjustment you always run the risk of promoting what was famously called ‘the soft bigotry of low expectations.’”

Asked last month whether adjusting student outcomes would create different standards and expectations for different types of students, Under Secretary of Education Ted Mitchell said that the department is still wrestling with the issue.

"We think that it's important to get comment from the field about whether that kind of adjustment is worthwhile or not,” Mitchell told reporters. "Our goal here is not to create different sets of standards but to make sure that we are measuring like [institutions] against like."


Michael Stratford

Michael Stratford, Reporter, covers federal policy for Inside Higher Ed. He joined the publication in August 2013 after a stint covering the Arkansas state legislature for The Associated Press. He previously worked and interned at Kiplinger’s Personal Finance magazine and The Chronicle of Higher Education. At The Chronicle, he wrote about federal policy and covered higher education issues in the 2012 elections. Michael grew up in Belmont, Mass. and graduated from Cornell University, where he was managing editor of The Cornell Daily Sun.

