Submitted by Ben Miller on September 3, 2013 - 3:00am
After a month of speculation, President Obama unveiled his plan to “shake up” higher education last week. As promised, the proposal contained some highly controversial elements, none greater than the announcement that the U.S. Department of Education will begin to rate colleges and universities in 2015 and tie financial aid to those results three years later. The announcement prompted typical clichéd Beltway commentary from the higher education industry about how “the devil is in the details” and the need to avoid “unintended consequences,” which should rightly be translated as: “We are not going to object outright now, while everyone’s watching, but will instead nitpick the plan to death later.”
But the ratings threat is more substantive than past announcements to put colleges “on notice,” if for no other reason than it is something the department can do without Congressional approval. Though it cannot actually tie aid received directly to these ratings without lawmakers (and the threat to do so would occur after Obama leaves office), the department can send a powerful message both to the higher education community and consumers nationwide by publishing these ratings.
Ratings systems, however, are no easy matter and require lots of choices in their methodologies. With that in mind, here are a few recommendations for how the ratings should work.
Ratings aren’t rankings.
Colleges have actually rated themselves in various forms for well over a hundred years. The Association of American Universities is an exclusive club of the top research universities that formed in 1900. The more in-depth Carnegie classifications, which group institutions based upon their focus and types of credentials awarded, have been around since the early 1970s. Though they may not be identified as such by most people, they are forms of ratings — recognitions of the distinctions between universities by mission and other factors.
A federal ratings system should be constructed similarly. There’s no reason to bother with ordinal rankings like those of U.S. News & World Report, because distinguishing among a few top colleges is less important than sorting out those that really are worse than others. Groupings that are narrow enough to recognize differences but broad enough to represent a meaningful sample are the way to go. The department could even consider letting colleges choose their initial groupings, as some already do for the data feedback reports the department produces through the Integrated Postsecondary Education Data System (IPEDS).
It’s easier to find the bottom tail of the distribution than the middle or top.
There are around 7,000 colleges in this country. Some are fantastic world leaders. Others are unmitigated disasters that should probably be shut down. But the vast majority fall somewhere in between. Sorting out the middle part is probably the hardest element of a ratings system — how do you discern within averageness?
We probably shouldn’t. A ratings system should sort out the worst of the worst by setting minimum performance standards on a few clear measures. It would clearly demonstrate that some results are so bad that they merit a poor rating. This standard could be excessively, laughably low, like a 10 percent graduation rate. Identifying the worst of the worst would be a huge step forward from what we do now. An ambitious ratings system could do the same thing on the top end using different indicators, setting very high bars that only a tiny handful of colleges would reach, but that’s much harder to get right.
Don’t let calls for the “right” data be an obstructionist tactic.
Hours after the President’s speech, representatives of the higher education lobby stated that the administration’s ratings “have an obligation to perfect data.” It’s a reasonable requirement that a rating system not be based only on flawed measures, like holding colleges accountable just for the completion of first-time, full-time students. But the call for perfect data is a smokescreen for intransigence, setting a nearly unobtainable bar. Even worse, the very people calling for this standard represent the institutions that will be the biggest roadblock to obtaining information that fulfills it. Having data demands come from those holding the data hostage creates a perfect opportunity for future vetoes in the name of letting the perfect be the enemy of the good. It’s also a tried and true tactic from One Dupont Circle. Look at graduation rates, where the higher education lobby is happy to put out reports critiquing their accuracy after getting Congress to enact provisions during the last Higher Education Act reauthorization that banned the creation of better numbers.
To be sure, the Obama administration has an obligation to engage in an open dialogue with willing partners to make a good faith effort at getting the best data possible for its ratings. Some of this will happen anyway thanks to improvements to the department’s IPEDS database. But if colleges are not serious about being partners in the ratings and refuse to contribute the data needed, they should not then turn around and complain about the results.
Stick with real numbers that reflect policy goals.
Input-adjusted metrics are a wonk’s dream. Controlling for factors and running regressions get us all excited. But they’re also useless from a policy implementation standpoint. Complex figures that account for every last difference among institutions will contextualize away all meaningful information until all that remains is a homogeneous jumble where everyone looks the same. Controlling for socioeconomic conditions also runs the risk of simply inculcating low expectations for students based on their existing results. Not to mention that any modeling choices in an input-adjusted system will add another dimension of criticism to the firestorm that will already surround the measures chosen.
That does not mean context should be ignored. There are just better ways to handle it. First and foremost is basing ratings on performance relative to peers. Well-crafted peer comparisons can accomplish largely the same thing as input adjustment, since institutions would be facing similar circumstances, while still relying on straightforward figures. Second, unintended consequences should be addressed by measuring them with additional metrics and clear goals. For example, afraid that focusing on a college's completion rate will discourage enrolling low-income students or unfairly penalize colleges that serve large numbers of such students? The ratings should give institutions credit for the socioeconomic diversity of their student body, require a minimum percentage of Pell students, and break out the completion rate by family income. Doing so not only provides a backstop against gaming, it also lays out clearer expectations to guide colleges' behavior — something the U.S. News rankings experience has shown colleges clearly know how to respond to, even on less useful measures like alumni giving (sorry, Brown, for holding you back on that one).
Mix factors a college can directly control with ones it cannot.
Institutions have an incentive to improve on measures included in a rating system. But some subset of colleges will also try to evade or “game” the measure. This is particularly true if it’s something under their control — look at the use of forbearances or deferments to avoid sanctions under the cohort default rate. No system will ever be able to fully root out gaming and loopholes, but one way to adjust for them is by complementing measures under a college’s control with ones that are not. For example, concerns about sacrificing academic quality to increase graduation rates could be partially offset by adding a focus on graduates’ earnings or some other post-completion behavior that is not under the college’s control. Institutions will certainly object to being held accountable for things they cannot directly influence. But basing the uncontrollable elements on relative instead of absolute performance should further ameliorate this concern.
Focus on outputs but don’t forget inputs.
Results matter. An institution that cannot graduate its students or avoid saddling them with large loan debts they cannot repay upon completion is not succeeding. But a sole focus on outputs could encourage an institution to avoid serving the neediest students as a way of improving its metrics and undermine the access goals that are an important part of federal education policy.
To account for this, a ratings system should include a few targeted input metrics that reflect larger policy goals, such as socioeconomic diversity or service to first-generation college students. Giving colleges “credit” in the ratings for serving the students we care most about will provide at least some check against potential gaming. Even better, some metrics should have a threshold a school must reach to avoid automatic classification into the lowest rating.
Put it together.
A good ratings system is both consistent and iterative. It keeps the core pieces the same from year to year but isn’t too arrogant to add new items and tweak ones that aren’t working. These recommendations offer a place to start. Group the schools sensibly — maybe even rely on existing classifications like Carnegie’s. Establish minimum performance thresholds on the metrics most indicative of an unsuccessful institution — things like completion rates, success with student loans, and time to degree. Include outcomes metrics that reflect institutions’ missions — such as transfer success for two-year schools, licensure and placement for vocational offerings, and earnings, completion and employment for four-year colleges and universities. But also include separate metrics that acknowledge policy challenges we care about — success in serving Pell students, the ability to get remedial students college-ready, socioeconomic diversity, etc. — to discourage creaming. The result should be something that reflects values and policy challenges, anticipates attempts to find workarounds, and refrains from dissolving into wonkiness and theoretical considerations divorced from reality.
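The mechanics described above — absolute floors that trigger the lowest rating, plus relative comparison against peer groups — can be sketched in a few lines of code. This is a hypothetical illustration only: the metric names, floor values, and grouping rule are assumptions for the sake of the sketch, not actual Department of Education criteria.

```python
# Hypothetical sketch of the threshold-plus-peers rating logic described above.
# All metric names and cutoff values are illustrative assumptions.

FLOORS = {
    "grad_rate": 0.10,   # the "laughably low" 10 percent graduation floor
    "pell_share": 0.05,  # minimum share of Pell students, as an access backstop
}

def rate(college, peer_medians):
    """Return a coarse rating group for a college's dict of metrics."""
    # Step 1: absolute floors. Missing any one floor means the lowest rating,
    # regardless of performance elsewhere.
    if any(college[metric] < floor for metric, floor in FLOORS.items()):
        return "lowest"
    # Step 2: relative performance against peer-group medians rather than
    # national averages, so similar institutions are compared to each other.
    above = sum(college[m] >= peer_medians[m] for m in peer_medians)
    return "upper" if above >= len(peer_medians) / 2 else "middle"

college = {"grad_rate": 0.55, "pell_share": 0.30, "loan_repayment": 0.80}
peers = {"grad_rate": 0.50, "loan_repayment": 0.75}
print(rate(college, peers))  # -> upper
```

Note that the peer-median comparison stands in for the input adjustment rejected earlier: context enters through who a college is compared against, not through regression-style controls on the numbers themselves.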
Ben Miller is a senior policy analyst in the New America Foundation's education policy program, where he provides research and analysis on policies related to postsecondary education. Previously, Miller was a senior policy advisor in the Office of Planning, Evaluation, and Policy Development in the U.S. Department of Education.
In my work as Oregon’s college evaluator, I am often asked why state approval is not "as good as accreditation" or "equivalent to accreditation."
We may be about to find out, to our sorrow: One version of the Higher Education Act reauthorization legislation moving through Congress quietly allows states to become federally recognized accreditors. A senior official in the U.S. Department of Education has confirmed that one part of the legislation would eliminate an existing provision that says state agencies can be recognized as federally approved accreditors only if they were recognized by the education secretary before October 1, 1991. Only one, the New York State Board of Regents, met the grandfather provision. By striking the grandfather provision, any state agency would be eligible to seek recognition.
If such a provision becomes law, we will see exactly why some states refuse to recognize degrees issued under the authority of other states: It is quite possible to be state-approved and a low-quality degree provider. Which states allow poor institutions to be approved to issue degrees?
Here are the Seven Sorry Sisters: Alabama (split authority for assessing and recognizing degrees), Hawaii (poor standards, excellent enforcement of what little there is), Idaho (poor standards, split authority), Mississippi (poor standards, political interference), Missouri (poor standards, political interference), New Mexico (grandfathered some mystery degree suppliers) and of course the now infamous Wyoming (poor standards, political indifference or active support of poor schools).
Wyoming considers degree mills and other bottom-feeders to be a source of economic development. You’d think that oil prices would relieve their need to support degree mills. Even the Japanese television network NHK sent a crew to Wyoming to warn Japanese citizens about the cluster of supposed colleges there: Does the state care so little about foreign trade that it is untroubled that 10 percent of the households in Japan saw that program? You’d think that Vice President Dick Cheney and U.S. Senator Mike Enzi, who now chairs the committee responsible for education, would care more about the appalling reputation of their home state. Where is Alan Simpson when we need him?
In the world of college evaluation, these seven state names ring out like George Carlin’s “Seven Words You Can’t Say On Television,” and those of us responsible for safeguarding the quality of degrees in other states often apply some of those words to so-called “colleges” approved to operate in these states -- so-called “colleges” like Breyer State University in Alabama and Idaho (which “State” does this for-profit represent, anyway?).
There are some dishonorable mentions, too, such as California, where the standards are not bad but enforcement has been lax and the process awash in well-heeled lobbyists. The new director of California’s approval agency, Barbara Ward, seems much tougher than recent placeholders -- trust someone trained as a nurse to carry a big needle and be prepared to use it.
The obverse of this coin is that in some states, regulatory standards are higher than the standards of national accreditors, as Oregon discovered when we came across an accredited college with two senior officials sporting fake degrees. The national accreditors, the Accrediting Commission of Career Schools and Colleges of Technology and the Accrediting Bureau of Health Education Schools, had not noticed this until we mentioned it to them. What exactly do they review, if they completely ignore people’s qualifications?
The notion that membership in an accrediting association is voluntary is, of course, one of the polite fictions that higher education officials sometimes say out loud when they are too far from most listeners to inspire a round of laughter. In fact, losing accreditation is not far removed from a death sentence for almost any college, because without accreditation, students are not eligible for federal financial aid, and without such aid, most of them can’t go to school – at least to that school.
For this reason, if Congress ever decoupled aid eligibility from accreditation by one of the existing accreditors -- for example, by allowing state governments to become accreditors -- the “national” accreditors of schools would dry up and blow away by dawn the next day: They serve no purpose except as trade associations and milking machines for federal aid dollars.
The Libertarian View of Degrees
One view of the purpose and function of college degrees suggests that the government need not concern itself with whether a degree is issued by an accredited college or even a real college. This might be considered the classic libertarian view: that employers, clients and other people should come to their own conclusions, based on their own research, regarding whether a credential called a “degree” by the entity that issued (or printed) it is appropriate for a particular job or need. This view is universally propounded by the owners of degree mills, who become wealthy by selling degrees to people who think they can get away with using them this way.
The libertarian view is tempting, but presupposes a capacity and inclination to evaluate that most employers have always lacked and always will, while of course an average private citizen is even more removed from that ability and inclination. Who will actually do the research that the hypothetical perfect employer should do?
Consider the complexities of the U.S. accreditation system, the proliferation of fake accreditors complete with names nearly identical to real ones (there were at least two fake DETCs, imitating the real Distance Education Training Council, in 2005), phone numbers, carefully falsified lists of approved schools, Web sites showing buildings far from where the owners had ever been and other accoutrements.
To the morass of bogus accreditors in the U.S., add the world. Hundreds of jurisdictions, mostly not English-speaking, issue a bewildering array of credentials under regimens not quite like American postsecondary education. Add a layer of corruption in some states and countries, a genial indifference in others, and a nearly universal lack of enforcement capacity, and you have a recipe for academic goulash that even governments are hard-pressed to render into proper compartments. In the past 10 days my office has worked with national officials in England, Sweden, the Netherlands, Canada and Australia to sort out suspicious degree validations. Very few businesses and almost no private citizens are capable of doing this without an exhausting allocation of time and resources. It does not and will not happen.
Should state governments accredit colleges?
State governments, not accreditors or the federal government, are the best potential guarantors of degree program quality at all but the major research universities, but only if they take their duty seriously, set and maintain high standards and keep politicians from yanking on the strings of approval as happens routinely in some states. Today, fewer than a dozen states have truly solid standards, most are mediocre and several, including the Seven Sorry Sisters, are quite poor.
If Congress is serious about allowing states to become accreditors, there must be a reason. I can think of at least two. First, such an action would kill off many existing accreditors without shifting their work onto the U.S. Department of Education (which no one in their right mind, Democrat, Republican or Martian, wants to enlarge). This would count as devolutionary federalism (acceptable to both parties under the right conditions).
The second reason is the one that is never spoken aloud. There will be enormous, irresistible pressure on many state governments to accredit small religious schools that could never get accredited even by specialized religious accreditors today. The potential bounty in financial aid dollars for all of those church-basement colleges is incalculable.
Remember that another provision of the same proposed statute would prohibit even regionally accredited universities from screening out transfer course work based on the nature of the accreditor. Follow the bread crumbs and the net result will be a huge bubble of low-end courses being hosed through the academic pipeline, with the current Congressional leadership cranking the nozzle.
The possibility of such an outcome should provide impetus to the discussions that have gone on for many years regarding the need for some uniformity (presumably at a level higher than that of the Seven Sorry Sister states) in standards for state approval of colleges. We need a “model code” for state college approvals, something that leading states can agree to (with interstate recognition of degrees) and that states with poor standards can aspire to.
The universe of 50 state laws, some excellent and some abysmal, allows poor schools to venue-shop and then claim that their state approval makes them good schools when they are little better than diploma mills. We must do better.
Should states accredit colleges? Only if they can do it well. Today’s record is mixed, and Congress should not give states the power to accredit (or allow the Department of Education to give states the power) until they have proven that their own houses are in order. That day has not yet come.
Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission.
Accountability, not access, has been the central concern of this Congress in its fitful efforts to reauthorize the Higher Education Act. The House of Representatives has especially shown itself deaf to constructive arguments for improving access to higher education for the next generation of young Americans, and dizzy about what sensible accountability measures should look like. The version of the legislation approved last week by House members has merit only because it lacks some of the strange and ugly accountability provisions proposed during the past three years, though a few vestiges of these bad ideas remain.
Why should colleges and universities be subject to any scheme of accountability? Because the Higher Education Act authorizes billions of dollars in grants and loans for lower-income students as it aims to make college accessible for all. This aid goes directly to students selecting from among a very broad array of institutions: private, public and proprietary; small and large; residential, commuter and on-line. Not unreasonably, the federal government wants to ensure that the resources being provided are used only at credible institutions. Hence, its insistence on accountability.
The financial limits on student aid were largely set in February when Congress hacked $12 billion from loan funds available to many of those same low-income students. With that action, the federal government shifted even more of the burden of access onto families and institutions of higher education, despite knowing that the next generation of college aspirants will be both significantly more numerous and significantly less affluent.
Now the Congress is at work on the legislation’s accountability provisions, and even as they allocate far fewer dollars, members of both chambers are considering still more intrusive forms of accountability. They appear to have been guided by no defensible conception of what appropriate accountability looks like.
Colleges and universities serve an especially important role for the nation -- a public purpose -- and they do so whether they are public or private or proprietary in status. The nation has a keen interest in their success. And in an era of heightened economic competition from the European Union, China, India and elsewhere, never has that interest been stronger.
In parallel with other kinds of institutions that serve the public interest, colleges and universities should make themselves publicly accountable for their performance in four dimensions: Are they honest, safe, fair, and effective? These are legitimate questions we ask about a wide variety of businesses: food and drug companies, banks, insurance and investment firms, nursing homes and hospitals, and many more.
Are they honest? Is it possible to read the financial accounts of colleges and universities to see that they conduct their business affairs honestly and transparently? Do they use the funds they receive from the federal government for the intended purposes?
Are they safe? Colleges and universities can be intense environments. Especially with regard to residential colleges and universities, do students face unacceptable risks due to fire, crime, sexual harassment or other preventable hazards?
Are they fair? Do colleges and universities make their programs genuinely available to all, without discrimination on grounds irrelevant to their missions? Given this nation’s checkered history with regard to race, sex, and disability, this is a kind of scrutiny that should be faced by any public-serving institution.
Existing federal laws quite appropriately govern measures dealing with all of these issues already. For the most part, accountability in each area can best be accomplished by asking colleges and universities to disclose information about their performance in a common and, hopefully, simple manner. No doubt measures for dealing with this required disclosure could be improved. But these three questions have not been the focus of debate during this reauthorization.
On the other hand, Congress has devoted considerable attention to a question that, while completely legitimate, has been poorly understood:
Are they effective? Do students who enroll really learn what colleges and universities claim to teach? This question should certainly be front and center in the debate over accountability.
Institutions of higher education deserve sharp criticism for past failure to design and carry out measures of effectiveness. Broadly speaking, the accreditation process has been our approach to asking and answering this question. For too long, accreditation focused on whether a college or university had adequate resources to accomplish its mission. This was later supplanted by a focus on whether an institution had appropriate processes. But over the past decade, accreditation has finally come to focus on what it should -- assessment of learning.
An appropriate approach to the question of effectiveness must be multiple, independent and professionally grounded. We need multiple measures of whether students are learning because of the wide variety of kinds of missions in American higher education; institutions do not all have identical purposes. Whichever standards a college or university chooses to demonstrate effectiveness, they should not be a creation of the institution itself -- nor of government officials -- but rather the independent development of professional educators joined in widely recognized and accepted associations.
Earlham College has used the National Survey of Student Engagement since its inception. We have made significant use of its findings both for re-accreditation and for improvement of what we do. We are also now using the Collegiate Learning Assessment. I believe these are the best new measures of effectiveness, but we need many more such instruments so that colleges and universities can choose the ones most appropriate to assessing fulfillment of learning in the scope of their particular missions.
Until the 11th hour, the House version of the Higher Education Act contained a provision that would have allowed states to become accreditors, a role they are ill equipped to play. Happily, that provision now has been eliminated. Meanwhile, however, the Commission on the Future of Higher Education, appointed by U.S. Secretary of Education Margaret Spellings, is flirting with the idea of proposing a mandatory one-size-fits-all national test.
Much of the drama of the accountability debate has focused on a fifth and inappropriate issue: affordability. Again until the 11th hour, the House version of the bill contained price control provisions. While these largely have been removed, the bill still requires some institutions that increase their price more rapidly than inflation to appoint a special committee that must include outsiders to review their finances. This is an inappropriate intrusion on autonomy, especially for private institutions.
Why is affordability an inappropriate aspect of accountability? Because in the United States we look to the market to “get the prices right,” not heavy-handed regulation or accountability provisions. Any student looking to attend a college or university has thousands of choices available to him or her at a range of tuition rates. Most have dozens of choices within close commuting distance. There is plenty of competition among higher education institutions.
Let’s keep the accountability debate focused on these four key issues: honesty, safety, fairness, and effectiveness. With regard to the last and most important of these, let’s put our best efforts into developing multiple, independent, professionally grounded measures. And let’s get back to the other key issue, which is: How do we provide access to higher education for the next generation of Americans?
Douglas C. Bennett is president and professor of politics at Earlham College, in Indiana.