Higher Ed Act Reauthorization

With two hearings, Congress takes first steps toward rewriting Higher Education Act

With concurrent hearings, Congress takes the first tentative steps toward reauthorizing the Higher Education Act.

Senate-sponsored report calls for fewer, simpler regulations on colleges

A Senate-sponsored task force releases a blueprint for Congress to scale back federal requirements that higher education leaders have long said are overly burdensome.

Departing Senator Harkin releases his Higher Ed Act reauthorization bill

Among the new provisions in the retiring Democrat's final proposal to rewrite the Higher Education Act are a student unit record system and incentives for colleges to graduate more Pell Grant recipients. 

Ratings and scorecards: the wrong kind of higher ed accountability (essay)

The scorecards and rating systems for higher education institutions that have been floating around Washington would, if used for purposes beyond providing comparable consumer information, make the federal government an arbiter of quality and a judge of institutional performance.

This change would undermine the comprehensive, careful scrutiny currently provided by regional accrediting agencies, replacing it with cursory reviews.

Regional accreditors provide a peer-review process that investigates the key challenges institutions face, looking beyond symptoms for root causes. It requires all providers of postsecondary education to examine closely every aspect of performance crucial to institutional excellence, improvement, and innovation. If you want to know how well a university is really performing, a graduation rate will only tell you so much.

But the peer-review process conducted by accrediting bodies provides a view into the vital systems of the institution: the quality of instruction, the availability and effectiveness of student support, how the institution is led and governed, its financial management, and how it uses data.

Moreover, as part of the peer-review process, accrediting bodies mobilize teams of expert volunteers whose reviews of governance and performance encourage institutions to make significant changes. No government agency can replace this work, provide the same level of careful review, or muster the resources to mobilize such an expert group of volunteers. In fact, the federal government has long recognized its own limitations and, since 1952, has used accreditation by a federally recognized accrediting agency as a baseline for institutional eligibility for Title IV financial aid programs.

Attacked at times by policy makers as an irrelevant anachronism and by institutions as a series of bureaucratic hoops through which they must jump, the regional accreditors’ approach to quality control has instead become increasingly cost-effective, transparent, and data- and outcomes-oriented.

Higher education accreditors work collaboratively with institutions to develop mutually agreed-upon standards for quality in programs, degrees, and majors. In fact, in the Southern region, accreditation has addressed the interest of the public and policy makers in gauging what students gain from their academic experience by requiring, since the 1980s, the assessment of student learning outcomes. Accrediting agencies also have established effective approaches to ensure that students achieve desired outcomes in all academic programs, not just a particular major.

While the federal government has the authority to take actions against institutions that have proven deficient, it has not used this authority regularly or consistently. A letter to Congress from the American Council on Education and 39 other organizations underscored the inability of the U.S. Department of Education to act with dispatch, noting that last year the Department announced “it would levy fines on institutions for alleged violations that occurred in 1995 -- nearly two decades prior.”

By contrast, consider that in the past decade, the Southern Association of Colleges and Schools Commission on Colleges stripped nine institutions of their accreditation and applied hundreds of sanctions to all types of institutions (from online providers to flagship campuses) in its region alone. But when accreditors have acted boldly in recent times, they have been criticized by politicians for going too far, giving accreditors the sense that we’re “damned if we do, damned if we don’t.”

The Problem With Simple Scores

Our concern about using rating systems and scorecards for accountability is based on several factors. Beyond tilting the system toward the lowest common denominator of quality, rating approaches can create new opportunities for institutions to game the system (as with the U.S. News & World Report ratings and rankings) and introduce unintended consequences, as we have seen in K-12 education.

Over the past decade, the focus on a few narrow measures for the nation’s public schools has not led to significant achievement gains or closed achievement gaps. Instead, it has narrowed the curriculum and spurred the current public backlash against overtesting. Sadly, the data generated from this effort have provided little actionable information to help schools and states improve, and have actually masked -- not illuminated -- the root causes of problems within K-12 institutions.

Accreditors recognize that the complex nature of higher education means neither accreditors nor the government should dictate how individual institutions meet desired outcomes. No single bright-line measure of accountability is appropriate for the vast diversity of institutions in the field, each with its own mission. The fact that students often enter and leave the system, and increasingly earn credits from multiple institutions, further complicates measures of accountability.

Moreover, setting minimum standards will not push institutions that think they are high performing to get better. All institutions -- even those considered “elite” -- need to work continually to achieve better outcomes and should have a role in identifying key outcomes and strategies for improvement that meet their specific challenges.

Accreditors also have demonstrated they are capable of addressing new challenges without strong government action. With the explosion of online providers, accreditors developed a way to handle quality control for these programs: accrediting groups partnered with state agencies, institutions, national higher education organizations, and other stakeholders to form the State Authorization Reciprocity Agreements, which use existing regional higher education compacts to let participating states and institutions operate under common, nationwide standards and procedures for regulating postsecondary distance education. This approach provides a more uniform and less costly regulatory environment for institutions, more focused oversight responsibilities for states, and better resolution of complaints without heavy-handed federal involvement.

Along with taking strong stands to sanction higher education institutions that do not meet high standards, regional accreditors are better equipped than any centralized governmental body at the state or national level to respond to the changing ecology of higher education and the explosion of online providers.

We argue for serious -- not checklist -- approaches to accountability that support improving institutional performance over time and hold institutions of all stripes to a broad array of criteria that make them better, not simply more compliant.

Belle S. Wheelan is president of the Southern Association of Colleges and Schools Commission on Colleges, the regional accrediting body for 11 states and Latin America. Mark A. Elgart is founding president and chief executive officer for AdvancED, the world’s largest accrediting body and parent organization for three regional K-12 accreditors.

At Senate hearing aimed at states' role in college affordability, Indiana attorney general points finger at feds

At Senate hearing, state and federal officials point fingers at one another when discussing who can fix issues of college cost and access.

Lawmakers circulate FAFSA simplification legislation as HEA reauthorization heats up in Senate

Two key senators circulate legislative proposals as the reauthorization of the Higher Education Act heats up in Congress.

Congress hears about the role of accreditation and online partnerships

Georgia Tech official describes Udacity partnership on Capitol Hill, provoking back-and-forth about whether accreditation encourages or deters innovation.

Essay on how President Obama's rating system should work

After a month of speculation, President Obama unveiled his plan to “shake up” higher education last week. As promised, the proposal contained some highly controversial elements, none greater than the announcement that the U.S. Department of Education will begin to rate colleges and universities in 2015 and tie financial aid to those results three years later. The announcement prompted the typical clichéd Beltway commentary from the higher education industry about how “the devil is in the details” and the need to avoid “unintended consequences,” which should rightfully be translated as, “We are not going to object outright now, when everyone’s watching, but instead will nitpick this to death later.”

But the ratings threat is more substantive than past announcements to put colleges “on notice,” if for no other reason than it is something the department can do without Congressional approval. Though it cannot actually tie aid received directly to these ratings without lawmakers (and the threat to do so would occur after Obama leaves office), the department can send a powerful message both to the higher education community and consumers nationwide by publishing these ratings.

Ratings systems, however, are no easy matter and require lots of choices in their methodologies. With that in mind, here are a few recommendations for how the ratings should work. 

Ratings aren’t rankings.

Colleges have actually rated themselves in various forms for well over a hundred years. The Association of American Universities is an exclusive club of the top research universities that formed in 1900. The more in-depth Carnegie classifications, which group institutions based upon their focus and types of credentials awarded, have been around since the early 1970s. Though they may not be identified as such by most people, they are forms of ratings — recognitions of the distinctions between universities by mission and other factors.

A federal rating system should be constructed similarly. There’s no reason to bother with ordinal rankings like those of U.S. News & World Report, because distinguishing among a few top colleges is less important than sorting out those that really are worse than others. Groupings that are narrow enough to recognize differences but sufficiently broad to represent a meaningful sample are the way to go. The department could even consider letting colleges choose their initial groupings, as some already do for the data feedback reports the department produces through the Integrated Postsecondary Education Data System (IPEDS).

It’s easier to find the bottom tail of the distribution than the middle or top.

There are around 7,000 colleges in this country. Some are fantastic world leaders. Others are unmitigated disasters that should probably be shut down. But the vast majority fall somewhere in between. Sorting out the middle part is probably the hardest element of a ratings system — how do you discern within averageness?

We probably shouldn’t. A ratings system should sort out the worst of the worst by setting minimum performance standards on a few clear measures. It would clearly establish that some results are so bad that they merit a poor rating. This standard could be excessively, laughably low, like a 10 percent graduation rate. Identifying the worst of the worst would be a huge step forward from what we do now. An ambitious ratings system could do the same thing on the top end using different indicators, setting very high bars that only a tiny handful of colleges would reach, but that’s much harder to get right.

Don’t let calls for the “right” data be an obstructionist tactic.

Hours after the President’s speech, representatives of the higher education lobby stated the administration’s ratings “have an obligation to perfect data.” It’s a reasonable requirement that a rating system not be based on flawed measures, like holding colleges accountable just for the completion of first-time, full-time students. But the call for perfect data is a smokescreen for intransigence, setting a nearly unobtainable bar. Even worse, the very people calling for this standard represent the institutions that will be the biggest roadblock to obtaining information that fulfills it. Having data demands come from those holding the data hostage creates a perfect opportunity for future vetoes in the name of making the perfect the enemy of the good. It’s also a tried-and-true tactic from One Dupont Circle. Look at graduation rates, where the higher education lobby is happy to put out reports critiquing their accuracy after getting Congress to ban the creation of better numbers during the last Higher Education Act reauthorization.

To be sure, the Obama administration has an obligation to engage in an open dialogue with willing partners to make a good faith effort at getting the best data possible for its ratings. Some of this will happen anyway thanks to improvements to the department’s IPEDS database. But if colleges are not serious about being partners in the ratings and refuse to contribute the data needed, they should not then turn around and complain about the results.

Stick with real numbers that reflect policy goals.

Input-adjusted metrics are a wonk’s dream. Controlling for factors and running regressions get us all excited. But they’re also useless from a policy implementation standpoint. Complex figures that account for every last difference among institutions will contextualize away all meaningful information until all that remains is a homogeneous jumble where everyone looks the same. Controlling for socioeconomic conditions also risks entrenching low expectations for students based upon their existing results. And any modeling choice in an input-adjusted system will add another dimension of criticism to the firestorm that will already surround the measures chosen.

That does not mean context should be ignored; there are just better ways to handle it. First and foremost is basing ratings on performance relative to peers. Well-crafted peer comparisons can accomplish largely the same thing as input adjustment, since institutions face similar circumstances, while still relying on straightforward figures. Second, unintended consequences should be addressed by measuring them with additional metrics and clear goals. For example, afraid that focusing on a college’s completion rate will discourage enrolling low-income students or unfairly penalize colleges that serve large numbers of them? The ratings should give institutions credit for the socioeconomic diversity of their student body, require a minimum percentage of Pell students, and break out the completion rate by family income. Doing so not only provides a backstop against gaming, it also lays out clearer expectations to guide colleges’ behavior -- and the U.S. News rankings experience has shown that colleges know how to respond to clear expectations, even on less useful measures like alumni giving (sorry, Brown, for holding you back on that one).

Mix factors a college can directly control with ones it cannot.

Institutions have an incentive to improve on measures included in a rating system. But some subset of colleges will also try to evade or “game” the measure. This is particularly true if it’s something under their control — look at the use of forbearances or deferments to avoid sanctions under the cohort default rate. No system will ever be able to fully root out gaming and loopholes, but one way to adjust for them is by complementing measures under a college’s control with ones that are not. For example, concerns about sacrificing academic quality to increase graduation rates could be partially offset by adding a focus on graduates’ earnings or some other post-completion behavior that is not under the college’s control. Institutions will certainly object to being held accountable for things they cannot directly influence. But basing the uncontrollable elements on relative instead of absolute performance should further ameliorate this concern.

Focus on outputs but don’t forget inputs.

Results matter. An institution that cannot graduate its students or avoid saddling them with large loan debts they cannot repay upon completion is not succeeding. But a sole focus on outputs could encourage an institution to avoid serving the neediest students as a way of improving its metrics and undermine the access goals that are an important part of federal education policy.

To account for this, a ratings system should include a few targeted input metrics that reflect larger policy goals, like socioeconomic diversity or service to first-generation college students. Giving colleges “credit” in the ratings for serving the students we care most about will provide at least some check against potential gaming. Even better, some metrics should have a threshold a school must reach to avoid automatic classification into the lowest rating.

Put it together.

A good ratings system is both consistent and iterative. It keeps the core pieces the same from year to year but isn’t too arrogant to add new items and tweak ones that aren’t working. These recommendations offer a place to start. Group the schools sensibly -- perhaps even relying on existing classifications like Carnegie’s. Establish minimum performance thresholds on the metrics most indicative of an unsuccessful institution -- completion rates, success with student loans, time to degree, and so on. Include outcomes metrics that reflect institutions’ missions, such as transfer success for two-year schools, licensure and placement for vocational offerings, and earnings, completion, and employment for four-year colleges and universities. But also include separate metrics for the policy challenges we care about -- success in serving Pell students, the ability to get remedial students college-ready, socioeconomic diversity, and so on -- to discourage creaming. The result should be something that reflects values and policy challenges, anticipates attempts to find workarounds, and refrains from dissolving into wonkiness and theoretical considerations divorced from reality.

Ben Miller is a senior policy analyst in the New America Foundation's education policy program, where he provides research and analysis on policies related to postsecondary education. Previously, Miller was a senior policy advisor in the Office of Planning, Evaluation, and Policy Development in the U.S. Department of Education.

Obama plan on college costs coming this week

This week President Obama will unveil a plan to make college affordable, promising tough love for some in the higher education "business." But will his proposals go anywhere?

For-profit fights resume with new twists on old debates

Washington is gearing up for the next round of fights over for-profit colleges, with student veterans and gainful employment getting top billing.
