The entire higher-ed-policy Twitterverse was abuzz Saturday morning with news of the release of Education Department data on the performance of colleges across America.
After an opening sentence like that, many folks probably stopped reading. So it goes.
The Obama administration had talked for years about doing some sort of college rankings or ratings, only to back off as it became increasingly clear that both the politics of it and the details of it were daunting. So instead, it simply released the data. Now you can do your own ratings.
As always with statistics, be aware of the story you’re trying to tell. Use the numbers as reality checks, and as cues for unearthing counterintuitive findings, but don’t treat them as gospel.
If you use the site the way most non-specialists will -- looking up schools one by one -- you’ll be confronted with three numbers for each school: average annual cost, graduation rate, and salary ten years after graduation. Each is plotted against a line representing the national average.
Be careful.
I did a quick read of the policy paper that came with the data, and made some notes. These are first thoughts; I’m open to correction on any of them.
To its credit, the official document largely concedes that the second criterion, IPEDS graduation rates, makes no sense. It notes specifically that the IPEDS definition of a graduation rate gives short shrift to community colleges (p. 21). But it features the flawed number in its headline data anyway.
It makes a gesture towards acknowledging an issue with the third criterion, noting that while the bar charts show a single average, there are actually very different averages for two-year colleges as opposed to four-year colleges (p. 24). Lumping the two together makes most community colleges look artificially bad, and most four-year colleges look artificially good. (You may start to notice a pattern here…) It also implicitly assumes that the third and fourth years of college add no value. If that’s true, we have a much bigger problem.
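To make the averaging problem concrete, here’s a quick sketch with made-up numbers (the salaries below are invented for illustration, not drawn from the Scorecard). Pooling two-year and four-year earnings into one national-average line sets a benchmark that a perfectly respectable community college will fall below, even when it sits above the average for its own sector.

```python
# Invented salary figures, for illustration only -- not Scorecard data.
two_year_salaries = [32_000, 34_000, 35_000, 36_000]    # hypothetical two-year schools
four_year_salaries = [42_000, 45_000, 48_000, 55_000]   # hypothetical four-year schools

pooled = two_year_salaries + four_year_salaries
pooled_avg = sum(pooled) / len(pooled)                  # the single "national average" line
two_year_avg = sum(two_year_salaries) / len(two_year_salaries)

my_cc = 35_500  # a hypothetical community college

print(f"Pooled national-average line: ${pooled_avg:,.0f}")   # $40,875
print(f"Two-year sector average:      ${two_year_avg:,.0f}") # $34,250
print(f"A community college at ${my_cc:,} sits above its sector's average "
      f"but below the pooled line -- so the chart reads as 'below average.'")
```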
But the flaw in the second criterion also feeds into the third. Many of the most financially successful alums of community colleges never actually graduated from them; they did a year, and then transferred. Those students show up as dropouts at the two-year level, and they’re invisible at the four-year level. Their salaries don’t get counted. Given that the “graduation or transfer” number for most cc’s is about double the “graduation” number, this is not a small point. The paper concedes difficulty in tracking transfer students (p. 34), but never addresses the possibility that it’s creating a substantive distortion by simply ignoring them.
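Here’s a toy cohort, again with invented numbers, that follows the post’s premise: transfer-out students count as non-completers, and their later earnings never show up in the community college’s figures. The counts are chosen so that the graduation-or-transfer rate is roughly double the graduation rate, as described above.

```python
# Invented cohort, for illustration only.
cohort = 100
graduated_here = 25      # finished the associate degree at the CC
transferred_out = 25     # left early for a four-year school; invisible in the CC's numbers
dropped_out = cohort - graduated_here - transferred_out

print(f"IPEDS-style graduation rate:  {graduated_here / cohort:.0%}")                      # 25%
print(f"Graduation-or-transfer rate:  {(graduated_here + transferred_out) / cohort:.0%}")  # 50%

# Hypothetical earnings ten years out.
salary_graduate = 38_000       # completed the associate degree
salary_transfer = 50_000       # transferred and finished a bachelor's elsewhere
salary_noncompleter = 28_000

counted = [salary_graduate] * graduated_here + [salary_noncompleter] * dropped_out
full_picture = counted + [salary_transfer] * transferred_out

print(f"Average salary, transfers excluded: ${sum(counted) / len(counted):,.0f}")            # ~$31,300
print(f"Average salary, transfers included: ${sum(full_picture) / len(full_picture):,.0f}")  # $36,000
```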
The flaw in the graduation data is most striking when the paper notes that “[f]or two-year schools, completion rates appear largely unrelated to repayment rates, calling into question what types of quality information might be reflected in completion rates.” (pp. 55-6). Well, yes. Yes, it does. So why lead with such misleading data?
(In reference to four-year schools, I was struck by the finding that “only about 5 percent of the variation in earnings across students who attend four-year schools is explained by the institution those students attend” (p. 49). If that’s largely correct, then the entire premise of “performance funding” is flawed.)
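For readers who want that statistic unpacked: the share of variation “explained by the institution” is essentially a variance decomposition, the spread among school-level averages divided by the total spread in earnings. Here’s a minimal sketch with invented spreads (mine, not the paper’s) showing how modest between-school differences produce a figure in that neighborhood.

```python
# Invented spreads, for illustration only.
# Suppose school-level average earnings vary with a standard deviation of about $3,000,
# while earnings among students *within* any one school vary with an sd of about $13,000.
between_sd = 3_000   # spread across institutions
within_sd = 13_000   # spread across students at the same institution

# Variance decomposition: total variance = between-school + within-school.
share_explained = between_sd**2 / (between_sd**2 + within_sd**2)
print(f"Share of earnings variance explained by the institution: {share_explained:.1%}")  # 5.1%
```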
These objections may sound nitpicky or defensive, but they add up to something serious. Yes, trained academic researchers can do much better analyses now than before, and that’s great; I hope some of them come up with tools or discoveries that move the discussion forward. But putting such badly flawed data in such innocent-looking, user-friendly bar charts implies a solidity they don’t warrant. I’d hate to see potential students make decisions based on information with such serious flaws.
My proposal? For now, dump the user-friendly part. Let the wonks have at it. Let’s not pretend to certainty we don’t have. And in the future, please do big releases on weekdays...