The Wall Street Journal and Times Higher Education have partnered to produce yet another college ranking. Should we applaud, groan, ignore or something else? I choose applause -- with suggestions.
This new project represents a positive step. For starters, any ranking that further challenges the hegemony of what I have termed the “wealth, reputation and rejection” rankings from U.S. News & World Report is welcome. Frank Bruni said much the same thing in his recent New York Times column, “Why College Rankings Are a Joke.”
I traveled the country for two years for the U.S. Department of Education -- I called myself the “listener in chief” -- to hear what students and colleges wanted or worried about in the federal College Scorecard. I explained that one of the most important reasons to develop the College Scorecard was to help shift the focus in evaluating higher education institutions to better questions and to the results and differences that should really matter to students choosing colleges and taxpayers who underwrite student aid. Just this week, the White House put it this way:
By shining light on the value that institutions provide to their students, the College Scorecard aligns incentives for institutions with the goals of their students and community. Although college rankings have traditionally rewarded schools for rejecting students and amassing wealth instead of giving every student a fair chance to succeed in college, more are incorporating information on whether students graduate, find good-paying jobs and repay their loans.
Some ratings have already blazed new trails by giving weight to compelling dimensions. Washington Monthly, for example, improved the discourse when it added public service to its criteria. The New York Times' ranking of high-performing institutions by their enrollment rates for Pell-eligible students was an enormous contribution to rethinking what matters most. The Scorecard in turn contributed by adding some (but admittedly not all) of the central reasons we as a nation underwrite postsecondary education opportunity: completion, affordability, meaningful employment and loan repayment.
The WSJ/THE entrant also offers positive approaches to appreciate. The project shares some sensible choices with the Scorecard, starting with not considering selectivity. That’s good: counting how many students a college or university rejects often tells us more about name recognition and gamesmanship than learning.
The WSJ/THE rankings also incorporate loan repayment, a useful window onto contributing factors such as an institution's attention to affordability, including net price and time to degree. Repayment also reflects whether students are well counseled and realistic about debt and projected income -- and whether they use their flexible repayment options, like income-contingent plans, so they can handle living costs along with their loan obligations, even if they choose work that pays only moderately.
And it was a very smart choice to use value-added outcome measures, drawing on work by the Brookings Institution melding Scorecard information and student characteristics. That approach is designed "to isolate the contribution of the college to student outcomes." It is also important because value-added metrics help respect the accomplishments of colleges that are taking, or want to increase enrollment of, populations of students that might not graduate or achieve other targets as easily as others.
That said, the WSJ/THE transition to outcomes-based measures is incomplete. A fresh new ranking is a chance to recognize colleges and universities that do a remarkable job in achieving strong results for students at affordable prices. Including a metric for per-student expenditure is an unfortunate relic from old-fashioned input-based rankings. It’s a problem not just because it mixes inputs and outcomes. More significantly, it clouds the focus on results, giving an advantage at the starting gate to incumbents that simply have a lot of money, even if other institutions are achieving better results more economically. That’s counterproductive when the goal should be to identify and reward institutional efficiency and affordability measures that generate good results as quickly as possible.
Even Better Questions
But it’s not too late. The tool already allows sorting by the component pillars, and Times Higher Education plans to work with the data to explore relationships and additional questions. WSJ/THE could rerun their analysis without the wealth measure and its conservative influence to see whether and how that alters the rankings. It will be interesting to see whether publics rise if resources are factored out. I’d like to think that a true results-based version would surface colleges and universities, perhaps a bit lesser known, that outperform their spending and bank accounts. Whether institutions achieve that through innovation, culture, dedication or some other advantages or efficiencies, it’s well worth a look.
These rankings include a new dimension, using a new 100,000-student survey that asks students about the opportunities their colleges and universities provide to engage in high-impact education practices. The survey also asks about students’ satisfaction and how enthusiastically they would recommend their institutions. We definitely need fresh ways to understand such education outcomes as well-being, lifelong learning, competence and satisfaction. How effectively does this survey, or the Gallup-Purdue survey, answer that need?
My first question is whether this really fits the sponsors’ desire to focus on outcome measures. Is it just an input or process measure, albeit an interesting one? I’d also like to know more about the relationship of opportunities to actual engagement in those practices, and I wonder how seriously or consistently students answered the questions. What does it mean to different students to have opportunities to “meet people from other countries,” for example, and does simply meeting them make for a more successful education?
This is also my chance to ask about the strength of the causal links between the opportunity to participate in a particular educational practice (whether an internship or speaking in class) and the outcomes, from learning to job getting and performance, with which it may be associated. The concern applies to items in the WSJ/THE "Did you have an opportunity to …" format, and it is most vivid for the Gallup-Purdue study. Maybe the people whose personalities and preferences incline them to choose projects that give them close, long-term connections with faculty members -- or whom faculty members choose for those projects based on characteristics including charm, social capital or prior experiences -- are engaging, optimistic and, yes, privileged in ways that would also make them engaged and healthy later in life. Was the involvement with those practices in college really causal?
To evaluate the WSJ/THE question about recommending the college or university to others, it would be valuable to know more about the basis for the students’ responses: Were they thinking about academic or quality-of-life considerations? Past experience or projections for how their educations would serve them later? I wonder if their replies were colored by an intuition that their answers could affect their institution’s standing, thus kicking in both their loyalty and self-serving desire to promote it. In short, it’s hard to know how much these new measures tell us. But it’s a worthy effort to build new tools beyond the few crude metrics we have now for understanding differences among institutions.
Serious Work to Do
On a broader level, none of the ratings and rankings has made much progress in expressing learning outcomes, although we urgently need more powerful and sensitive ways to articulate them. Whether students have developed the knowledge, skills, capacities and problem-solving abilities appropriate to their programs should be at the heart of any assessment of educational results.
It's also time to move beyond Washington Monthly's good but simple additions and capture intangible and societal outcomes more successfully. Through that failure of imagination, we ratings designers, along with everyone else in higher education, have allowed income to take center stage among outcomes -- and played into the damaging shift in the perception of higher education from a public good to a private one.
I said many times in the Scorecard conversation that it’s sensible and understandable for families to want to know if an educational experience would typically generate what they would consider a decent living, including the ability to handle the loans assumed to pay for it, and how employment or income results compare across programs or institutions. But, I went on, that does not mean that income can stand alone as though that’s all that matters. If an affordable housing advocate or journalist is satisfied in her preparation to do that work, or a dancer or teacher got exactly the education he needs to pursue his goals and be a good citizen, how can we measure and reflect that? This is not a news flash but an echo: we need better ways to communicate with families and students about how higher education makes a difference to both the student and society far beyond just economic returns.
There’s other work to be done in the college information enterprise. This month, as the Scorecard celebrated its first anniversary, the Education Department marked the occasion with a data refresh and also wove together new partnerships to expand the Scorecard’s value. One of the big challenges for any useful college information source is to make sure it reaches the students who need it the most: the ones who aren’t sure if, or may already doubt that, they can afford college, who know least about the options available, who are uncertain about the outcomes they can expect from college or at different schools. They’re the ones who suffer from the severe “guidance gap” at far too many high schools and among many adult populations. This intensified collaboration among government, counselor organizations and higher education institutions at every level is a wise strategy.
Ultimately, however, the most transformative role of the well-conceived rankings and scorecards will turn out to lie not in whether every student reads them but in their value in supporting institutional improvement. They are a gold mine for benchmarking and can help institutions choose which outcomes really matter and then work across functions to improve them.
The fact is that for decades colleges have invested far too much energy striving -- or playing games -- to get better on measures that don't really matter. Asking smarter questions about genuinely significant priorities can help us graduate more low-income students into rewarding work and find affordable paths to solid learning outcomes for citizens and workers. Better ratings, continuously improved and building on one another's contributions, give us a chance to put higher education's intense competitive energies into worthy races.