Many of my fellow college presidents remain worried about the Obama Administration’s proposed (and still being developed) rating system for higher education. While Education Department officials have been responsive and thoughtful about our concerns, many among us fundamentally do not trust government to get this right.
Or anyone, for that matter. After all, we already have lots of rating systems and they mostly seem flawed -- some, like U.S. News and World Report, extremely so. Institutions game the system in various ways. Rarely do rating systems capture the complexity of the industry with its rich mix of institutions, missions, and student markets served. Almost always, they are deeply reductionist.
On the other hand, higher education mostly resists transparency, good data sharing, and accountability. I may be in the minority of my peers who actually support some kind of rating system, but I am in the majority in my worry about what will get measured and how. Take the proposed gainful employment regulations, for example. My approach to accountability dictates that you hold me accountable for what I can control. I can’t control the labor market (can I hold government accountable for that piece?), the willingness of a graduate to move for a job, or the ridiculously low wages our society pays teachers and social workers. I can control the level of preparedness my students have as they enter their chosen field. So hold me accountable for the latter, but not the former.
I’ve always thought that a rating system that does not adjust for the student being served is inherently flawed. It often fails to capture the real value-add of an education. For example, if we could measure how far a student has moved intellectually, developmentally, and professionally, I might argue that Harvard and Yale would rank near the bottom of such a rating system, while a Rio Salado College might rank near the top, at least in terms of how far they move students educationally. After all, if you take the top 1 percent of high school graduates, how much actual educational value have you added (social value, networks, status, and so on are other matters, of course)? Or, perhaps more kindly, the educational success of these students has a lot more to do with them and the other high performers around them than with Harvard -- they would thrive anywhere they found themselves.
Yet it seems certain that we will have a rating system. So I’ve played with the rough outlines of a rating system that is student-centered and program-based, and that places institutions (by program) on a matrix that tells us more than a simple score. Any rating system will answer some questions and ignore others. The questions I want my system to answer are these:
- How does an institution, at the program level (which matters more than the institutional level), serve various student profiles (because students are not at all alike)?
- Do students who fit my profile graduate in large numbers, find jobs, avoid crushing debt, and get paid well enough?
- How does the institution perform overall (as opposed to at the program level) on those questions, the government’s concern?
I’d design the system on these two axes:
Student matching. This depends on creating a student profile: a combination of academic and financial factors, and perhaps other items we think might be important (gender? race? age? veteran status?).
High/low risk and resource = level of aid needed + HS GPA + HS rating
Program success. The success of the program in terms of placing students in a related field one year after graduation (students do all sorts of things immediately after graduation -- we need to filter out that noise), their earnings one year after graduation, the percentage of students who graduate, and the cost of the program.
Success rate by program = grad rate + % of grads working in the related field or field of choice + avg earnings + net cost + average debt
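To make these two axes concrete, here is one minimal sketch of how they might be computed. Everything in it is an illustrative assumption -- the weights, the scales, the dollar reference points, and my choice to treat net cost and debt as penalties rather than simple addends -- not a finished formula.

```python
# A minimal sketch of the two proposed axes. All weights, scales, and
# reference values are illustrative assumptions, not part of the proposal.

def risk_resource_score(aid_needed, hs_gpa, hs_rating):
    """Student-matching axis: higher = lower risk / more resources.

    aid_needed -- share of cost requiring aid, 0.0 (none) to 1.0 (full need)
    hs_gpa     -- high school GPA on a 4.0 scale
    hs_rating  -- high school quality, 0.0 (weak) to 1.0 (strong)
    """
    # Aid needed is inverted so all three terms point the same direction.
    return (1 - aid_needed) + hs_gpa / 4.0 + hs_rating  # roughly a 0-3 scale

def program_success_score(grad_rate, in_field_rate, avg_earnings,
                          net_cost, avg_debt):
    """Program-success axis: graduation, placement, earnings, cost, debt.

    Rates are 0.0-1.0; dollar figures are normalized against assumed
    reference values, with cost and debt treated as penalties.
    """
    earnings_term = min(avg_earnings / 50_000, 1.0)  # assumed reference wage
    cost_penalty = min(net_cost / 40_000, 1.0)       # assumed reference cost
    debt_penalty = min(avg_debt / 30_000, 1.0)       # assumed reference debt
    return grad_rate + in_field_rate + earnings_term - cost_penalty - debt_penalty

# A high-risk, low-resource student looking at one hypothetical program:
student = risk_resource_score(aid_needed=0.9, hs_gpa=2.8, hs_rating=0.3)
program = program_success_score(grad_rate=0.55, in_field_rate=0.75,
                                avg_earnings=38_000, net_cost=15_000,
                                avg_debt=22_000)
print(round(student, 2), round(program, 2))  # 1.1 0.95
```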
Because the system uses the student as the lens of interpretation, a student with a high risk profile (let’s say a 2.8 HS GPA from a lower-ranked high school, with a family income of less than $40,000) looking at three schools offering Secondary Education programs might see this kind of comparison:
|  | School A | School B | School C |
| --- | --- | --- | --- |
| % of Like Students | 95% | 40% | 35% |
| % Working in Field | 28% | 85% | 75% |
In the above example, School A looks like certain poorly performing for-profits, while School C might be the profile of a public institution. School B, a private institution, leaves its graduates with more debt than does public School C, but it graduates more of its students and places them more effectively. So the student has some tradeoffs to consider.
In contrast, a low-risk student with ample resources (say, a 3.6 GPA from a good high school and a family income of $80,000) looking at the same program might see a different report for Schools A and B; in this case, an elite institution has been substituted for School C.
|  | School A | School B | School C |
| --- | --- | --- | --- |
| % of Like Students | 3% | 30% | 12% |
| % Working in Field | 40% | 85% | 95% |
For this second student, elite School C is a tough admit, and if the student matriculates, it will provide less aid than it would for a very high-need student. On the other hand, less selective School B provides more merit aid for a student with this profile, driving down long-term indebtedness compared with the first student (a practice that is common and often criticized).
School A in both cases is a for-profit, and not a very good one. Many of the bad actors in the for-profit world take very high-risk students, charge them a lot, and don’t graduate enough of them, and those who earn a credential too often fail to land jobs in their field -- or, in the case of more generalized liberal arts fields, land in jobs they would not otherwise choose. They would look a lot like School A in the above examples. But a better for-profit player like Capella University or DeVry University would land closer to Southern New Hampshire University (the institution I lead), something like School B.
In contrast, an elite School C (think Princeton or Harvard) mostly takes very low-risk, high-resource students and charges them quite a bit (or funds them fully if they are among the small number of poor students it accepts). Its graduates do quite well. My proposed system would reveal that School C has a high success rate overall, including for high-risk, low-resource students -- it just doesn’t serve very many of them. The second student would be better served to look at School C in the first example, a public college.
The key is to give an interested student the tools to accurately assess where they fall in the student profile analysis so they get the best match of schools when considering programmatic performance. Then they could, by program (and degree level), find the institutions that serve them best. Most importantly, the starting point for using the system is the student. A typical SNHU student would struggle at Harvard -- we are in fact the better institution for that student profile. Very high-risk or low-resource students might be better served at a community college than at SNHU, where our higher cost would be a much bigger burden and a heavier price to pay should they not graduate. Those same students might be better served at Harvard if they are academically prepared but very poor (Harvard would not likely burden them with a lot of debt), though they’d see that it has very few spots for them.
Such a system could make the student profile piece easy to use through a simple heuristic that identifies key data points (the name of your high school and the city/town where it is located; your current GPA or average grade level; whether one or both of your parents completed a college degree). Ideally, the system would pull family financial information from the last tax return. Rating high school quality might be a challenge, but I bet there are rankings or state ratings that could be employed.
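A rough sketch of what that intake heuristic might look like, with the field names, thresholds, and ratings source invented purely for illustration:

```python
# Illustrative sketch of the profile-intake heuristic. The fields,
# thresholds, and ratings source are assumptions, not a specification.

from dataclasses import dataclass

@dataclass
class StudentProfile:
    high_school: str
    hs_city: str
    gpa: float                      # current GPA or average grade level
    parent_completed_college: bool
    family_income: float            # ideally pulled from the last tax return

def classify(profile: StudentProfile, hs_ratings: dict) -> tuple:
    """Place a student on the risk/resource matrix.

    hs_ratings maps school name to a 0.0-1.0 quality rating, e.g.
    derived from state ratings (a hypothetical data source).
    """
    rating = hs_ratings.get(profile.high_school, 0.5)  # unknown school: midpoint
    risk = "low-risk" if profile.gpa >= 3.3 and rating >= 0.6 else "high-risk"
    resource = "high-resource" if profile.family_income >= 75_000 else "low-resource"
    return risk, resource

student = StudentProfile("Central High", "Manchester", 2.8, False, 38_000)
print(classify(student, {"Central High": 0.4}))  # ('high-risk', 'low-resource')
```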
What I’ve outlined above can be the base analysis, but I think it would be fairly easy to add “filters” to the data for other factors. For example, an institution might do a pretty good job of graduating most students, but a filter for minority students might reveal a much lower graduation rate at a given institution. We could have an interesting discussion about what those other filters might be (veterans? gender? first generation? age bracket?). This is an area that needs to be carefully thought out -- we don’t want the unintended outcome to be students of a given “type” self-selecting out of institutions because they confuse group identity metrics with their own talents and drive. This is complicated territory.
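Mechanically, a filter is just recomputing an outcome for a subgroup. A minimal sketch (the record fields here are hypothetical), including the suppression of small cells that the caution above demands:

```python
# Sketch of the "filter" idea: recompute an outcome for a subgroup.
# Record fields ('graduated', 'veteran', etc.) are assumed for illustration.

def grad_rate(students, **filters):
    """Graduation rate, optionally filtered, e.g. grad_rate(s, veteran=True)."""
    pool = [s for s in students
            if all(s.get(k) == v for k, v in filters.items())]
    if len(pool) < 2:   # tiny cells mislead more than they inform;
        return None     # a real system would pick a defensible threshold
    return sum(s["graduated"] for s in pool) / len(pool)

students = [
    {"graduated": True,  "veteran": False},
    {"graduated": True,  "veteran": False},
    {"graduated": False, "veteran": True},
    {"graduated": True,  "veteran": True},
]
print(grad_rate(students))                # 0.75 overall
print(grad_rate(students, veteran=True))  # 0.5 for veterans
```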
There is one other important variation to consider. The system as described fails to capture those students who go on to further degree study instead of entering the job market, so a community college or a liberal arts college that sends many graduates directly to grad school would be unfairly hurt on employment and earnings if this population weren’t separated out. Things like graduation and placement rates would be calculated separately for those seeking work after graduation and for those declaring their intent to go on to the next degree level (four-year degrees for community college graduates; master’s or doctoral programs for four-year graduates). The latter would be kept out of the denominator for the job-seeking analysis, and thus community colleges and schools sending students on to graduate programs would be more accurately represented.
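The denominator adjustment is simple to state in code. A sketch, again with hypothetical record fields:

```python
# Sketch of the proposed denominator adjustment: graduates who declare
# intent to pursue a further degree are excluded from the job-placement
# denominator. Record fields are illustrative assumptions.

def placement_rate(graduates):
    """graduates: list of dicts with 'continuing_education' (bool)
    and 'employed_in_field' (bool) flags."""
    job_seekers = [g for g in graduates if not g["continuing_education"]]
    if not job_seekers:
        return None  # nothing to report for this cohort
    placed = sum(g["employed_in_field"] for g in job_seekers)
    return placed / len(job_seekers)

cohort = [
    {"continuing_education": True,  "employed_in_field": False},  # off to grad school
    {"continuing_education": False, "employed_in_field": True},
    {"continuing_education": False, "employed_in_field": True},
    {"continuing_education": False, "employed_in_field": False},
]
print(placement_rate(cohort))  # ≈ 0.67: the grad-school-bound student is excluded
```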
I know I don’t have this quite right yet, but I think it might work with some more refining. Some might argue that “percentage of graduates working in a related field” is just too hard to capture (especially for liberal arts programs), so maybe simple employment status and level of earnings would suffice in Version 1. Instead of “average earnings one year after graduation,” one could use a simple metric like “percentage above or below the national median wage,” which is about $35,000. Some would ask the system to measure only one student profile: how programs (and institutions) serve high-risk students. After all, low-risk students do pretty well, while high-risk students are often served very poorly and fail at high rates. That is where we waste enormous amounts of federal dollars.
Even in its broadly sketched form, this kind of rating system does a number of things:
- It reframes the question of institutional performance to program performance (which matters a whole lot more to students) while still allowing regulators to roll up program performance into an aggregate institutional profile if they wish;
- It squares program performance with student profile, recognizing that different institutions work better for different students -- a much more nuanced presentation of the challenge;
- The idea of filters allows students to go deeper, perhaps discovering that while Program X has good outcomes overall, it does less well for minority students or veterans, which is an important insight if you are a minority student or a veteran (though we’d have to be wary of generalizing from small numbers in many cases, itself another issue);
- It avoids the oversimplification of a single score, which fails to take into account the variety of institutional types, missions, and student markets at work in higher education;
- It allows the government to call out poor performers;
- It allows institutions to have a more robust discussion about their programs, something they do poorly, by and large.
Some of what I propose would be difficult to execute and would require some hard thinking about how to get the data. For example, tracking job placement is devilishly hard, but databases like LinkedIn are making it much easier, and the University of Texas has unveiled a new system that achieves much of what I outline (yes, I am conceding the employment metric, despite my objections -- the government is likely to demand it, after all). Tax returns can also be accessed in useful ways. The College Scorecard captures some of what we would need. In a state like Massachusetts, a combination of MCAS scores, per-pupil spending, and the percentage going on to college could help rate high schools. My point is that most of the necessary data is available. Every rating system has its problems, and we need to choose which execution challenges we wish to sort out. I prefer problems of execution to problems of oversimplification.
I know this idea doesn’t address important aspects of higher education: our role in civic engagement, the measurement of critical thinking skills, helping students find a sense of calling, and much more. But those are not the big questions the administration is asking us to address.
For those questions, I wonder if a system like the one I’ve sketched above might give us a richer understanding -- a student-centered understanding -- of institutional effectiveness that works far better than some of what is being described today. It is a near-certainty that we will have a ratings system, so let’s at least have one that focuses the question through the lens of students and captures the complexity that is higher education today.
Paul J. LeBlanc is president of Southern New Hampshire University.