Accreditation

Growing federal role in accreditation will have drawbacks (essay)

For accreditation, 2016 will be remembered as an inflection point, the culmination of a multiyear revamping. Two features now dominate this space.

First, the federal government, through the U.S. Department of Education, has consolidated its authority over accreditation and is now the major actor directing and leading this work. Second, the public, whether members of the news media, consumer protection advocates, think tanks or employers, now agrees that the primary task of accreditation is public accountability. That means accredited status is supposed to be about protecting students -- to serve as a signal that what an institution or program says about itself is reliable, that there are reasonable chances of student success and that students will benefit economically in some way from the educational experience.

Both the strengthened federal oversight and expectations of public accountability have staying power. They are not temporary disruptions. They will remake accreditation for the foreseeable future.

At least some government authority over accreditation, and at least some public concern about the space and its accountability, are not new. What is new, and what makes this moment pivotal, is the extent of agreement on both the expanded federal role and public accountability. Both stand in significant contrast to the longstanding practice of accrediting organizations operating as independent, nongovernmental bodies accustomed to setting their own direction and determining their own accountability.

This disruption can result in serious drawbacks for accreditation and higher education -- and students. Those drawbacks include a loss of responsible independence for both accreditation and the higher education institutions that are accredited. This independence has been essential to the growth and development of U.S. higher education as an enterprise outstanding in both quality and access. There are concerns about maintaining academic freedom, so vital to high-quality teaching and research, in the absence of this independence. We have not, in this country, experimented with government and the public determining quality absent academics themselves. Government calls for standardization in accreditation can, however unintentionally, undermine the valuable variation among types of colleges and universities, reducing options for students.

Consolidation of Federal Oversight

By way of background, “accreditation is broken” has been a federal government mantra for several years now. For the U.S. Congress, both Democrats and Republicans, as well as the executive branch, messages about perceived deficiencies of accreditation have been driving the push for greater government oversight, whether delivered from a secretary of education describing accreditors as “watchdogs that don’t bite” or an under secretary talking about how accreditors are “asleep at the switch” or a senator maintaining that “too often the accreditation means nothing” or a leading House representative saying accreditors may have to change how they operate in the changing landscape of higher education.

Members of Congress, through various hearings, bills and statements, have called for changes that would focus accreditation more on student learning, create an alternative accreditation system or strengthen government oversight of accreditation, especially in relation to protecting students. Yes, some policy makers are concerned about the department going too far. Crucially, however, the debate is not about what is being done -- greater federal oversight and public accountability -- but who should have the authority to act.

Both Congress and the department are pushing accreditation to focus more intently on both the performance of institutions and the achievement of students. From a federal perspective, “quality” is now about higher graduation rates, less student debt and default, better jobs, and decent earnings. The Education Department’s Transparency Agenda, announced last fall, has become a major vehicle to assert this federal authority. The Agenda ties judgment about whether accreditation is effective to graduation and default information, with the department, for the first time, publishing such data arrayed by accreditors and publishing accreditors’ student achievement standards -- or identifying the absence of such standards. The department also is taking steps to move accreditors toward standardizing the language of accreditation, toward more emphasis on quantitative standards and toward greater transparency about accreditation decisions.

Consistent with the Agenda, the National Advisory Committee on Institutional Quality and Integrity (NACIQI), the federal body tasked with recommending to the secretary of education whether accrediting organizations are to be federally recognized, is now including attention to graduation and default rates as part of its periodic recognition reviews. Committee meetings involve more and more calls for judging accrediting organizations’ fitness for federal recognition based less on how these organizations operate and more on how well their accredited institutions and programs are doing when it comes to graduation and debt. And NACIQI has been clear that, because of the importance to the public and to protecting students, all activities of accrediting organizations now need to be part of the committee’s purview.

Most recently, Democratic Senators Elizabeth Warren, Dick Durbin and Brian Schatz introduced a bill on accreditation that would upend the process. The bill captures the major issues and concerns that have been raised by Congress and the department during the past few years, offering remedies driven by expanding federal authority over accreditors and institutions: federally imposed student achievement standards, a federal definition of quality, federal design of how accreditation is to operate and federal requirements that accrediting organizations include considerations of debt, default, affordability and success with Pell Grants as part of their standards. While it is unlikely that anything will happen with this bill during the remainder of the year, it provides a blueprint for change in accreditation for the next Congress and perhaps the foundation for the future reauthorization of the Higher Education Act itself.

Moreover, as government plays a more prominent role in accreditation, the process has become important enough to be political. Lawmakers sometimes press individual accrediting organizations to act against specific institutions or to protect certain institutions. Across both the for-profit and nonprofit sectors, lawmakers make their own judgments and are public about whether individual institutions are to have accredited status and how well individual accrediting organizations do their jobs. Now, when accrediting organizations go before NACIQI, not only are they concerned about meeting federal law and regulation, but they are also focused on the politics around any of their institutions or programs.

In short, the shift in Washington -- defining quality expectations for accreditors in contrast to accepting how accreditors define quality, intensive and extensive managing of how accreditors are carrying out their work in contrast to leaving this management to the accreditors, seeking to standardize accreditation practice in contrast to the variation in practice that comes with a decentralized accreditation world of 85 different accrediting organizations -- has placed the federal government in a strong oversight role. There is bipartisan support in Congress and across branches of government for this rearrangement of the accreditation space. It is difficult to imagine that the extent to which the federal government influences the activity and direction of accreditation will diminish any time soon, if at all.

Consolidation of Public Expectations

The pressure on accreditation for public accountability has significant staying power in a climate where higher education is both essential and, for many, expensive, even with federal and state assistance. There is a sense of urgency surrounding the need for at least some higher education for individual economic and social well-being as well as the future competitiveness and capacity of society. At the same time, disturbingly, student loan debt now totals more than $1.3 trillion, and in 2016 the average graduate who took out loans to complete a bachelor’s degree owed more than $37,000. In this environment, the public wants accreditation to focus on students gaining a quality education at a manageable financial level.

Accreditation is now the public’s business. On a weekly basis, multiple articles on accreditation appear in the news media, nationally and internationally. Social media reflect this as well, with any article about accreditation, but especially negative news, engaging large numbers of people in a very short period of time. Think tank reports on accreditation are increasing in number, mostly focused on how it needs to change.

From all sources, the focus is on accreditation and whether it is a reliable source of public accountability. Media attention centers on default rates that are too high and graduation rates that are too low, on repeated expressions of employer dissatisfaction with employees’ skills and on whether accredited institutions do a good job of preparing workers. In the face of a constant stream of articles highlighting these concerns, the public increasingly questions what accreditation accomplishes and, in particular, whether it is publicly accountable.

Moreover, where judgments about academic quality were once left to accreditors and institutions, technology now enables the news media and the public to make such judgments on their own. Enormous amounts of data on colleges and universities are readily available, from graduation rates to attrition, retention and transfer rates. Multiple data sources such as the federal government’s College Scorecard, College Navigator and Education Trust’s College Results Online are now available to be used by students, families, employers and journalists. Urgency, concern and widespread opportunity to make one’s own judgment about quality have all coalesced to raise questions about why any reliance on accreditation is needed, unless accreditation carries out this public accountability role. Perhaps the most striking example of this development is Google’s recent announcement that it is working with the College Scorecard to present Scorecard data (e.g., graduation rates, earnings, tuition) as part of a display when people search for a particular college or university.

What’s Next?

This, then, is the revamped accreditation space, with the federal government determining the direction of accreditation and a public that is driving accreditation into a predominantly public accountability role.

Will this revamping be successful? Will students be better served? Only if government, the public, higher education and accreditation can strike a balance. Expanded government oversight should be accompanied by acknowledging and respecting the independence, academic judgment and academic leadership long provided by colleges and universities and central to effective higher education and accreditation. Emphasis on public accountability should be accompanied by valuing the role of academics in determining quality. Until recently, that balance was, by and large, achieved through the relationship among accreditation, higher education and government. The way forward needs this same balance.

Judith S. Eaton is president of the Council for Higher Education Accreditation, a membership association of 3,000 degree-granting colleges and universities.

Image caption: Packed room during June meeting of federal panel that oversees accreditors.

Group releases draft quality standards for competency-based education

Group of colleges releases voluntary standards for competency-based education, which an Education Department official says could help prevent the rise of bad actors.

Indiana creates student 'value index' while support builds for a federal student data system

While political support in Washington builds slowly for a federal student record database, Indiana and the University of Texas System get creative with their own data on how students fare after college.

Hundreds of colleges, many for-profits, seek a new accreditor

National accreditor ACCSC gets inquiries from nearly 300 colleges overseen by ACICS, most of them for-profits. Critics of accreditors will be watching as the agency reviews the flood of applications.

ABA is taken to task by feds and critics on law school student outcomes

The American Bar Association takes a hard line on two law schools’ admissions standards amid criticism that the group’s accrediting arm is not doing enough to help struggling law school graduates.

Iowa regulator agreed with Ashford University's complaint about meddling by federal and California agencies

Ashford University cries foul on veterans' agency and California for meddling in Iowa's decision to yank the for-profit's GI Bill eligibility, and newly released emails show an Iowa official shared that view.

Report on for-profits in six countries finds similar problems and few benefits

U.K. report on for-profit colleges in six countries finds few benefits of sector and calls for tighter regulation, while acknowledging lack of data makes it hard to set rules.

IT think tank's call for alternative forms of credentialing and measuring competency

Technology think tank says standardized testing by outside groups and alternative forms of credentialing could create helpful competitive pressure on higher education and the traditional college degree.

WICHE's interstate passport seeks to help students transfer while preserving colleges' autonomy

New interstate network seeks to help students transfer across state lines without losing credits, but also defers to faculty members at each college about how to measure learning.

A new way to improve the available data on student success (essay)

A national outcry regarding the cost of education and the poor performance of institutions in graduating their students has raised questions about the extent to which accreditors are fulfilling their mission of quality assurance. Politicians have expressed outrage, for instance, at the fact that accreditors are not shutting down institutions with graduation rates in the single digits.

At the same time, accreditors and others have noted that the graduation data available from the National Center for Education Statistics’ Integrated Postsecondary Education Data System, familiarly known as IPEDS, include only first-time, full-time student cohorts and, as such, are too limited to serve as the measure of institutional success -- or the basis on which accreditation is judged. But simply noting this problem does nothing to solve it. The imperative and challenge of getting reliable data on student success must be more broadly acknowledged and acted upon. The WASC Senior College and University Commission (WSCUC) has taken important steps to do just that.

As is well known, IPEDS graduation rates include only those students who enrolled as first-time, full-time students at an institution. Of the approximately 900,000 undergraduate students enrolled at institutions accredited by WSCUC, about 40 percent, or 360,000, fit this category. That means approximately 540,000 students in this region, including all transfer and part-time students, are unaccounted for by IPEDS graduation rate data.

The National Student Clearinghouse provides more helpful data regarding student success: in addition to full-time student cohorts, it considers part-time students and students who combine the two modes, and its data include information on students who are still enrolled, have transferred and are continuing their studies elsewhere, or have graduated elsewhere. Six-year student outcomes, however, are still the norm.

Since 2013, WSCUC has worked with a tool developed by one of us -- John Etchemendy, provost at Stanford University and a WSCUC commissioner -- that allows an institution and our commission to get a fuller and more inclusive picture of student completion. That tool, the graduation rate dashboard, takes into account all students who receive an undergraduate degree from an institution, regardless of how they matriculate (first time or transfer) or enroll (full time or part time). It is a rich source of information, enabling institutions to identify enrollment, retention and graduation patterns of all undergraduate students and to see how those patterns are interrelated -- potentially leading to identifying and resolving issues that may be impeding student success.

Here’s how it works.

WSCUC collects six data points from institutions via our annual report, the baseline data tracked for all accredited, candidate and eligible institutions and referenced by WSCUC staff, peer evaluators and the commission during every accreditation review. On the basis of those data points, we calculate two completion measures: the unit redemption rate and the absolute graduation rate. The unit redemption rate is the proportion of units granted by an institution that are eventually “redeemed” for a degree from that institution. The absolute graduation rate is the proportion of students entering an institution who eventually -- a key word -- graduate from that institution.

The idea of the unit redemption rate is easy to understand. Ideally, every unit granted by an institution ultimately results in a degree (or certificate). Of course, no institution actually achieves this ideal, since students who drop out never “redeem” the units they take while enrolled, resulting in a URR below 100 percent. So the URR is an alternative way to measure completion, somewhat different from the graduation rate, since it counts units rather than students. But most important, it counts units that all students -- full time and part time, first time and transfer -- take and redeem.
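To make the unit-counting idea concrete, here is a minimal sketch of how such a rate could be computed. The essay does not specify WSCUC's actual inputs or formula; the function name, the reduced set of inputs and the simplifying assumption that every graduate redeems a fixed number of units are all illustrative.

```python
def unit_redemption_rate(total_units_granted, degrees_awarded, units_per_degree):
    """Illustrative unit redemption rate (URR): the share of all units granted
    that are eventually "redeemed" for a degree. Assumes, for simplicity, that
    each graduate redeems the same fixed number of units; this is a sketch,
    not WSCUC's published calculation."""
    units_redeemed = degrees_awarded * units_per_degree
    return units_redeemed / total_units_granted

# Hypothetical figures: 1,000,000 units granted over a period in which
# 4,500 degrees (at roughly 180 quarter units each) were awarded.
print(f"URR: {unit_redemption_rate(1_000_000, 4_500, 180):.0%}")  # 81%
```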

Interestingly, using one additional data point (the average number of units taken by students who drop out), we can convert the URR into a graduation measure, the absolute graduation rate, which estimates the proportion of students entering a college or university (whether first time or transfer) who eventually graduate. Given the relationship between annual enrollment, numbers of units taken in a given year and the length of time it takes students to complete their degrees -- all of which vary -- the absolute graduation rate is presented as an average over eight years. While not an exact measure, it can be a useful one, especially when used alongside IPEDS data to get a more nuanced and complete picture of student success at an institution.
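The essay does not publish the conversion itself, but the logic can be illustrated under one simple model: graduates redeem a full degree's worth of units, while students who drop out take, on average, a known number of units that are never redeemed. Under that assumption -- ours, not WSCUC's -- the URR can be solved for a student-level graduation rate, as in the sketch below.

```python
def absolute_graduation_rate(urr, units_per_degree, avg_units_dropout):
    """Back out a student-level graduation rate from a unit-level URR.

    Assumed model (not WSCUC's published formula): if p is the share of
    entering students who eventually graduate, D the units in a degree and
    d the average units taken by students who drop out, then
        URR = p*D / (p*D + (1 - p)*d),
    which rearranges to the expression returned below.
    """
    D, d = units_per_degree, avg_units_dropout
    return (urr * d) / (D * (1 - urr) + urr * d)

# Hypothetical example: an 81 percent URR, 180-unit degrees and dropouts who
# average 60 units imply an absolute graduation rate near 59 percent.
print(f"Absolute graduation rate: {absolute_graduation_rate(0.81, 180, 60):.0%}")
```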

What is the advantage to using this tool? For an institution like Stanford -- where enrollments are relatively steady and the overwhelming majority of students enter as first-time, full-time students and then graduate in four years -- there is little advantage. In fact, IPEDS data and dashboard data look very similar for that type of institution: students enter, take roughly 180 quarter credits for an undergraduate degree and redeem all or nearly all of them for a degree in four years. For an institution serving a large transfer and/or part-time population, however, the dashboard can provide a fuller picture than ever before of student success. One of our region’s large public universities has a 2015 IPEDS six-year graduation rate of 30 percent, for example, while its absolute graduation rate for the year was 61 percent.

What accounts for such large discrepancies? For many WSCUC institutions, the IPEDS graduation rate takes into account fewer than 20 percent of the students who actually graduate. The California State University system, for example, enrolls large numbers of students who transfer from community colleges and other institutions. Those students are counted in the absolute graduation rate, but not in the IPEDS six-year rate.

Because the dashboard includes IPEDS graduation rate data as well as the percentage of students in the first-time, full-time cohort, it offers a better picture of an institution’s student population, as well as of the extent to which IPEDS data are more or less reliable as indicators of student success at that institution.

Here’s an example: between 2006 and 2013, at California State University Dominguez Hills, the IPEDS six-year graduation rate ranged between 24 percent and 35 percent. Those numbers, however, reflect only a small percentage of the university’s student population. The low of 24 percent in 2011 reflected only 7 percent of its students; the high of 35 percent in 2009 reflected just 14 percent. The eight-year IPEDS total over those years, reflecting 10 percent of the student population, was 30 percent.

In contrast, looking at undergraduate student completion using the dashboard, we see an absolute graduation rate of 61 percent -- double the IPEDS calculation. Clearly, the dashboard gives us a significantly different picture of student completion at that institution.

And there’s more. To complement our work with the dashboard, WSCUC staff members have begun work on triangulating dashboard data with data from the National Student Clearinghouse and IPEDS to look at student success from various angles. We recognize that all three of these tools have limitations and drawbacks as well as advantages: we’ve already noted the limitations of the IPEDS and National Student Clearinghouse data, as well as the benefit of the latter’s inclusion of transfer students and students still enrolled after the six-year period. In addition, the data from both IPEDS and the clearinghouse can be disaggregated by student subpopulations of gender and ethnicity, as well as by institution type, which can be very beneficial in evaluating institutional effectiveness in supporting student success.

Pilot work has been done to plot an institution’s IPEDS and dashboard data in relation to the clearinghouse data, displayed as a box-and-whisker graph that provides the distribution of graduation rates regionally by quartile in order to give an indication of an institution’s success in graduating its students relative to peer institutions within the region. While care must be taken to understand and interpret the information provided through these data, we do believe that bringing them together in this way can be a powerful source of self-analysis, which can lead to institutional initiatives to improve student completion.
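As a rough illustration of that kind of display, the sketch below draws a box-and-whisker plot of invented peer graduation rates and overlays a single institution's IPEDS and dashboard figures. The numbers and layout are assumptions for demonstration only; they approximate, rather than reproduce, the pilot graphs described above.

```python
import matplotlib.pyplot as plt

# Invented regional peer graduation rates, plus one institution's
# IPEDS six-year rate and dashboard absolute graduation rate.
peer_rates = [0.28, 0.35, 0.41, 0.47, 0.52, 0.58, 0.63, 0.69, 0.74]
ipeds_rate, dashboard_rate = 0.30, 0.61

fig, ax = plt.subplots()
ax.boxplot(peer_rates, widths=0.4)  # regional distribution of rates by quartile
ax.scatter([1], [ipeds_rate], marker="o", label="IPEDS six-year rate")
ax.scatter([1], [dashboard_rate], marker="s", label="Dashboard absolute rate")
ax.set_ylabel("Graduation rate")
ax.set_xticks([])
ax.legend()
plt.show()
```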

As noted, WSCUC has been working with the dashboard since 2013. While we are excited and encouraged regarding the benefits of the tool in providing a more complete and nuanced picture of student success, we also recognize that we have a great deal of work ahead of us to make the tool as useful as we believe it can be. After two pilot projects involving a limited number of WSCUC-accredited institutions, the required collection of data from all WSCUC colleges and universities in 2015 revealed a number of challenges for institutions in submitting the correct data. The dashboard can be somewhat difficult to understand, especially for institutions with large shifts in enrollment patterns. And unlike National Student Clearinghouse data, dashboard data, at least at this point, cannot be disaggregated to reveal patterns of completion for various student subpopulations.

Such issues notwithstanding, we are encouraged by the value of the dashboard that we have seen to date and are committed to continuing to refine this tool. WSCUC staff members have given presentations both regionally and nationally on the dashboard, including one to IPEDS trainers to show them the possibilities of this tool to extend the data available nationally regarding student completion.

We are hopeful that other accreditors and possibly the NCES will find the dashboard a useful tool and, if so, adopt it as an additional completion measure for institutions across the country. In any case, we will continue this work regionally, not just to complain about the available data but to contribute to their improvement and usefulness.

Mary Ellen Petrisko is president of the WASC Senior College and University Commission. John Etchemendy is provost of Stanford University.
